I0823 08:51:32.047395 6 e2e.go:224] Starting e2e run "d7320f36-e51d-11ea-87d5-0242ac11000a" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598172691 - Will randomize all specs
Will run 201 of 2164 specs

Aug 23 08:51:32.231: INFO: >>> kubeConfig: /root/.kube/config
Aug 23 08:51:32.235: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 23 08:51:32.248: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 23 08:51:32.279: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 23 08:51:32.279: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 23 08:51:32.279: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 23 08:51:32.288: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 23 08:51:32.288: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 23 08:51:32.288: INFO: e2e test version: v1.13.12
Aug 23 08:51:32.289: INFO: kube-apiserver version: v1.13.12
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 08:51:32.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Aug 23 08:51:32.394: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-d7b48f1e-e51d-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 08:51:32.404: INFO: Waiting up to 5m0s for pod "pod-configmaps-d7b503c5-e51d-11ea-87d5-0242ac11000a" in namespace "e2e-tests-configmap-tm92m" to be "success or failure"
Aug 23 08:51:32.406: INFO: Pod "pod-configmaps-d7b503c5-e51d-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028423ms
Aug 23 08:51:34.410: INFO: Pod "pod-configmaps-d7b503c5-e51d-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005214002s
Aug 23 08:51:36.414: INFO: Pod "pod-configmaps-d7b503c5-e51d-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009365655s
Aug 23 08:51:38.417: INFO: Pod "pod-configmaps-d7b503c5-e51d-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012772529s
STEP: Saw pod success
Aug 23 08:51:38.417: INFO: Pod "pod-configmaps-d7b503c5-e51d-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 08:51:38.420: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-d7b503c5-e51d-11ea-87d5-0242ac11000a container configmap-volume-test: 
STEP: delete the pod
Aug 23 08:51:38.468: INFO: Waiting for pod pod-configmaps-d7b503c5-e51d-11ea-87d5-0242ac11000a to disappear
Aug 23 08:51:38.625: INFO: Pod pod-configmaps-d7b503c5-e51d-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 08:51:38.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-tm92m" for this suite.
Aug 23 08:51:44.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 08:51:44.710: INFO: namespace: e2e-tests-configmap-tm92m, resource: bindings, ignored listing per whitelist
Aug 23 08:51:44.719: INFO: namespace e2e-tests-configmap-tm92m deletion completed in 6.088208444s

• [SLOW TEST:12.429 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 08:51:44.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 08:52:24.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-4zmrf" for this suite.
Aug 23 08:52:32.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 08:52:32.394: INFO: namespace: e2e-tests-container-runtime-4zmrf, resource: bindings, ignored listing per whitelist
Aug 23 08:52:32.440: INFO: namespace e2e-tests-container-runtime-4zmrf deletion completed in 8.339397538s

• [SLOW TEST:47.720 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 08:52:32.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-zltx
STEP: Creating a pod to test atomic-volume-subpath
Aug 23 08:52:32.596: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zltx" in namespace "e2e-tests-subpath-lpq9r" to be "success or failure"
Aug 23 08:52:32.601: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.7448ms
Aug 23 08:52:34.604: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008297803s
Aug 23 08:52:36.656: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059626357s
Aug 23 08:52:38.709: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113384664s
Aug 23 08:52:40.713: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117198527s
Aug 23 08:52:42.716: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Running", Reason="", readiness=false. Elapsed: 10.120377467s
Aug 23 08:52:44.721: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Running", Reason="", readiness=false. Elapsed: 12.124646017s
Aug 23 08:52:46.724: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Running", Reason="", readiness=false. Elapsed: 14.127956117s
Aug 23 08:52:48.729: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Running", Reason="", readiness=false. Elapsed: 16.132812854s
Aug 23 08:52:50.732: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Running", Reason="", readiness=false. Elapsed: 18.135930173s
Aug 23 08:52:52.740: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Running", Reason="", readiness=false. Elapsed: 20.14339375s
Aug 23 08:52:54.788: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Running", Reason="", readiness=false. Elapsed: 22.191447585s
Aug 23 08:52:57.075: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Running", Reason="", readiness=false. Elapsed: 24.479380869s
Aug 23 08:52:59.129: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Running", Reason="", readiness=false. Elapsed: 26.533037522s
Aug 23 08:53:01.133: INFO: Pod "pod-subpath-test-configmap-zltx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.536696716s
STEP: Saw pod success
Aug 23 08:53:01.133: INFO: Pod "pod-subpath-test-configmap-zltx" satisfied condition "success or failure"
Aug 23 08:53:01.135: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-zltx container test-container-subpath-configmap-zltx: 
STEP: delete the pod
Aug 23 08:53:01.167: INFO: Waiting for pod pod-subpath-test-configmap-zltx to disappear
Aug 23 08:53:01.303: INFO: Pod pod-subpath-test-configmap-zltx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zltx
Aug 23 08:53:01.303: INFO: Deleting pod "pod-subpath-test-configmap-zltx" in namespace "e2e-tests-subpath-lpq9r"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 08:53:01.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-lpq9r" for this suite.
Aug 23 08:53:07.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 08:53:08.060: INFO: namespace: e2e-tests-subpath-lpq9r, resource: bindings, ignored listing per whitelist
Aug 23 08:53:08.118: INFO: namespace e2e-tests-subpath-lpq9r deletion completed in 6.809056328s

• [SLOW TEST:35.678 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 08:53:08.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Aug 23 08:53:08.746: INFO: Waiting up to 5m0s for pod "client-containers-11090969-e51e-11ea-87d5-0242ac11000a" in namespace "e2e-tests-containers-z7ssc" to be "success or failure"
Aug 23 08:53:08.967: INFO: Pod "client-containers-11090969-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 221.103093ms
Aug 23 08:53:10.969: INFO: Pod "client-containers-11090969-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223592997s
Aug 23 08:53:12.973: INFO: Pod "client-containers-11090969-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.227363704s
Aug 23 08:53:15.033: INFO: Pod "client-containers-11090969-e51e-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.287164889s
STEP: Saw pod success
Aug 23 08:53:15.033: INFO: Pod "client-containers-11090969-e51e-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 08:53:15.402: INFO: Trying to get logs from node hunter-worker2 pod client-containers-11090969-e51e-11ea-87d5-0242ac11000a container test-container: 
STEP: delete the pod
Aug 23 08:53:16.304: INFO: Waiting for pod client-containers-11090969-e51e-11ea-87d5-0242ac11000a to disappear
Aug 23 08:53:16.597: INFO: Pod client-containers-11090969-e51e-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 08:53:16.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-z7ssc" for this suite.
Aug 23 08:53:23.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 08:53:23.954: INFO: namespace: e2e-tests-containers-z7ssc, resource: bindings, ignored listing per whitelist
Aug 23 08:53:24.004: INFO: namespace e2e-tests-containers-z7ssc deletion completed in 7.064900392s

• [SLOW TEST:15.885 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 08:53:24.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-1a488b6d-e51e-11ea-87d5-0242ac11000a
Aug 23 08:53:24.117: INFO: Pod name my-hostname-basic-1a488b6d-e51e-11ea-87d5-0242ac11000a: Found 0 pods out of 1
Aug 23 08:53:29.122: INFO: Pod name my-hostname-basic-1a488b6d-e51e-11ea-87d5-0242ac11000a: Found 1 pods out of 1
Aug 23 08:53:29.122: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1a488b6d-e51e-11ea-87d5-0242ac11000a" are running
Aug 23 08:53:29.125: INFO: Pod "my-hostname-basic-1a488b6d-e51e-11ea-87d5-0242ac11000a-4kz8d" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-23 08:53:24 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-23 08:53:27 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-23 08:53:27 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-23 08:53:24 +0000 UTC Reason: Message:}])
Aug 23 08:53:29.125: INFO: Trying to dial the pod
Aug 23 08:53:34.137: INFO: Controller my-hostname-basic-1a488b6d-e51e-11ea-87d5-0242ac11000a: Got expected result from replica 1 [my-hostname-basic-1a488b6d-e51e-11ea-87d5-0242ac11000a-4kz8d]: "my-hostname-basic-1a488b6d-e51e-11ea-87d5-0242ac11000a-4kz8d", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 08:53:34.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-hrzlh" for this suite.
Aug 23 08:53:40.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 08:53:40.303: INFO: namespace: e2e-tests-replication-controller-hrzlh, resource: bindings, ignored listing per whitelist
Aug 23 08:53:40.335: INFO: namespace e2e-tests-replication-controller-hrzlh deletion completed in 6.193552124s

• [SLOW TEST:16.331 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 08:53:40.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Aug 23 08:53:40.474: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 08:53:52.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-cwjr7" for this suite.
Aug 23 08:53:58.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 08:53:58.487: INFO: namespace: e2e-tests-init-container-cwjr7, resource: bindings, ignored listing per whitelist
Aug 23 08:53:58.500: INFO: namespace e2e-tests-init-container-cwjr7 deletion completed in 6.106511901s

• [SLOW TEST:18.165 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 08:53:58.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 08:54:22.887: INFO: Container started at 2020-08-23 08:54:03 +0000 UTC, pod became ready at 2020-08-23 08:54:21 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 08:54:22.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-cdn6b" for this suite.
Aug 23 08:54:50.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 08:54:50.970: INFO: namespace: e2e-tests-container-probe-cdn6b, resource: bindings, ignored listing per whitelist
Aug 23 08:54:50.977: INFO: namespace e2e-tests-container-probe-cdn6b deletion completed in 28.085996611s

• [SLOW TEST:52.476 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 08:54:50.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 23 08:54:51.166: INFO: Waiting up to 5m0s for pod "pod-4e2b7269-e51e-11ea-87d5-0242ac11000a" in namespace "e2e-tests-emptydir-9w7kg" to be "success or failure"
Aug 23 08:54:51.238: INFO: Pod "pod-4e2b7269-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 71.232685ms
Aug 23 08:54:53.499: INFO: Pod "pod-4e2b7269-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332323808s
Aug 23 08:54:55.501: INFO: Pod "pod-4e2b7269-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335155361s
Aug 23 08:54:57.505: INFO: Pod "pod-4e2b7269-e51e-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.338801933s
STEP: Saw pod success
Aug 23 08:54:57.505: INFO: Pod "pod-4e2b7269-e51e-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 08:54:57.508: INFO: Trying to get logs from node hunter-worker pod pod-4e2b7269-e51e-11ea-87d5-0242ac11000a container test-container: 
STEP: delete the pod
Aug 23 08:54:57.577: INFO: Waiting for pod pod-4e2b7269-e51e-11ea-87d5-0242ac11000a to disappear
Aug 23 08:54:57.741: INFO: Pod pod-4e2b7269-e51e-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 08:54:57.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9w7kg" for this suite.
Aug 23 08:55:04.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 08:55:04.351: INFO: namespace: e2e-tests-emptydir-9w7kg, resource: bindings, ignored listing per whitelist
Aug 23 08:55:04.353: INFO: namespace e2e-tests-emptydir-9w7kg deletion completed in 6.604948559s

• [SLOW TEST:13.376 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 08:55:04.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 08:55:04.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 23 08:55:04.753: INFO: stderr: ""
Aug 23 08:55:04.753: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-08-23T03:25:46Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:50:51Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 08:55:04.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rf2rx" for this suite.
Aug 23 08:55:12.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 08:55:12.813: INFO: namespace: e2e-tests-kubectl-rf2rx, resource: bindings, ignored listing per whitelist
Aug 23 08:55:12.847: INFO: namespace e2e-tests-kubectl-rf2rx deletion completed in 8.086079445s

• [SLOW TEST:8.493 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 08:55:12.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Aug 23 08:55:13.501: INFO: Waiting up to 5m0s for pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-dq6mf" in namespace "e2e-tests-svcaccounts-lnppc" to be "success or failure"
Aug 23 08:55:13.517: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-dq6mf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.241267ms
Aug 23 08:55:15.621: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-dq6mf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120433038s
Aug 23 08:55:17.675: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-dq6mf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173994913s
Aug 23 08:55:19.693: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-dq6mf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.191727069s
Aug 23 08:55:21.696: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-dq6mf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.19472136s
Aug 23 08:55:23.727: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-dq6mf": Phase="Running", Reason="", readiness=false. Elapsed: 10.22648465s
Aug 23 08:55:26.028: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-dq6mf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.527300853s
STEP: Saw pod success
Aug 23 08:55:26.028: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-dq6mf" satisfied condition "success or failure"
Aug 23 08:55:26.030: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-dq6mf container token-test: 
STEP: delete the pod
Aug 23 08:55:26.418: INFO: Waiting for pod pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-dq6mf to disappear
Aug 23 08:55:26.438: INFO: Pod pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-dq6mf no longer exists
STEP: Creating a pod to test consume service account root CA
Aug 23 08:55:26.441: INFO: Waiting up to 5m0s for pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-w97hc" in namespace "e2e-tests-svcaccounts-lnppc" to be "success or failure"
Aug 23 08:55:26.556: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-w97hc": Phase="Pending", Reason="", readiness=false. Elapsed: 115.054475ms
Aug 23 08:55:28.633: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-w97hc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191650517s
Aug 23 08:55:30.636: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-w97hc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194338779s
Aug 23 08:55:32.638: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-w97hc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196809802s
Aug 23 08:55:34.641: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-w97hc": Phase="Running", Reason="", readiness=false. Elapsed: 8.199838948s
Aug 23 08:55:36.644: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-w97hc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.202666483s
STEP: Saw pod success
Aug 23 08:55:36.644: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-w97hc" satisfied condition "success or failure"
Aug 23 08:55:36.646: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-w97hc container root-ca-test: 
STEP: delete the pod
Aug 23 08:55:36.669: INFO: Waiting for pod pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-w97hc to disappear
Aug 23 08:55:36.725: INFO: Pod pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-w97hc no longer exists
STEP: Creating a pod to test consume service account namespace
Aug 23 08:55:36.746: INFO: Waiting up to 5m0s for pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-g2b5g" in namespace "e2e-tests-svcaccounts-lnppc" to be "success or failure"
Aug 23 08:55:36.776: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-g2b5g": Phase="Pending", Reason="", readiness=false. Elapsed: 30.142008ms
Aug 23 08:55:39.179: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-g2b5g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.433479592s
Aug 23 08:55:41.268: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-g2b5g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.522325678s
Aug 23 08:55:43.338: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-g2b5g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.592766924s
Aug 23 08:55:45.415: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-g2b5g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.669625875s
Aug 23 08:55:47.418: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-g2b5g": Phase="Running", Reason="", readiness=false. Elapsed: 10.672655484s
Aug 23 08:55:49.422: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-g2b5g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.676171188s
STEP: Saw pod success
Aug 23 08:55:49.422: INFO: Pod "pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-g2b5g" satisfied condition "success or failure"
Aug 23 08:55:49.424: INFO: Trying to get logs from node hunter-worker pod pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-g2b5g container namespace-test: 
STEP: delete the pod
Aug 23 08:55:49.764: INFO: Waiting for pod pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-g2b5g to disappear
Aug 23 08:55:49.780: INFO: Pod pod-service-account-5b7dcb94-e51e-11ea-87d5-0242ac11000a-g2b5g no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 08:55:49.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-lnppc" for this suite.
Aug 23 08:55:57.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 08:55:57.914: INFO: namespace: e2e-tests-svcaccounts-lnppc, resource: bindings, ignored listing per whitelist
Aug 23 08:55:57.946: INFO: namespace e2e-tests-svcaccounts-lnppc deletion completed in 8.160448528s

• [SLOW TEST:45.099 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 08:55:57.946: INFO: >>>
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 23 08:55:58.047: INFO: Waiting up to 5m0s for pod "downwardapi-volume-760ab8ed-e51e-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-x6xmt" to be "success or failure" Aug 23 08:55:58.068: INFO: Pod "downwardapi-volume-760ab8ed-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.077052ms Aug 23 08:56:00.071: INFO: Pod "downwardapi-volume-760ab8ed-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024095471s Aug 23 08:56:02.394: INFO: Pod "downwardapi-volume-760ab8ed-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.347485603s Aug 23 08:56:04.669: INFO: Pod "downwardapi-volume-760ab8ed-e51e-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.622795092s STEP: Saw pod success Aug 23 08:56:04.669: INFO: Pod "downwardapi-volume-760ab8ed-e51e-11ea-87d5-0242ac11000a" satisfied condition "success or failure" Aug 23 08:56:04.672: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-760ab8ed-e51e-11ea-87d5-0242ac11000a container client-container: STEP: delete the pod Aug 23 08:56:04.844: INFO: Waiting for pod downwardapi-volume-760ab8ed-e51e-11ea-87d5-0242ac11000a to disappear Aug 23 08:56:04.907: INFO: Pod downwardapi-volume-760ab8ed-e51e-11ea-87d5-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 08:56:04.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-x6xmt" for this suite. Aug 23 08:56:13.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 08:56:13.036: INFO: namespace: e2e-tests-projected-x6xmt, resource: bindings, ignored listing per whitelist Aug 23 08:56:13.064: INFO: namespace e2e-tests-projected-x6xmt deletion completed in 8.15384229s • [SLOW TEST:15.118 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client Aug 23 08:56:13.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 23 08:56:13.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-9nc89' Aug 23 08:56:15.809: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 23 08:56:15.809: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Aug 23 08:56:15.824: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Aug 23 08:56:15.849: INFO: scanned /root for discovery docs: Aug 23 08:56:15.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-9nc89' Aug 23 08:56:34.642: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Aug 23 08:56:34.642: INFO: stdout: "Created e2e-test-nginx-rc-ed578157346a2779099300278dd6a2bc\nScaling up e2e-test-nginx-rc-ed578157346a2779099300278dd6a2bc from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ed578157346a2779099300278dd6a2bc up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ed578157346a2779099300278dd6a2bc to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Aug 23 08:56:34.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-9nc89' Aug 23 08:56:34.771: INFO: stderr: "" Aug 23 08:56:34.771: INFO: stdout: "e2e-test-nginx-rc-ed578157346a2779099300278dd6a2bc-ddjc4 " Aug 23 08:56:34.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ed578157346a2779099300278dd6a2bc-ddjc4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nc89' Aug 23 08:56:34.906: INFO: stderr: "" Aug 23 08:56:34.906: INFO: stdout: "true" Aug 23 08:56:34.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ed578157346a2779099300278dd6a2bc-ddjc4 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-9nc89' Aug 23 08:56:35.010: INFO: stderr: "" Aug 23 08:56:35.010: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Aug 23 08:56:35.010: INFO: e2e-test-nginx-rc-ed578157346a2779099300278dd6a2bc-ddjc4 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Aug 23 08:56:35.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-9nc89' Aug 23 08:56:35.179: INFO: stderr: "" Aug 23 08:56:35.179: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 08:56:35.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9nc89" for this suite. 
Aug 23 08:57:01.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 08:57:01.520: INFO: namespace: e2e-tests-kubectl-9nc89, resource: bindings, ignored listing per whitelist Aug 23 08:57:01.567: INFO: namespace e2e-tests-kubectl-9nc89 deletion completed in 26.364808997s • [SLOW TEST:48.503 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 08:57:01.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-9c4db222-e51e-11ea-87d5-0242ac11000a STEP: Creating a pod to test consume secrets Aug 23 08:57:02.492: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9c5405fb-e51e-11ea-87d5-0242ac11000a" in namespace 
"e2e-tests-projected-nnzhc" to be "success or failure" Aug 23 08:57:02.538: INFO: Pod "pod-projected-secrets-9c5405fb-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 45.28837ms Aug 23 08:57:04.748: INFO: Pod "pod-projected-secrets-9c5405fb-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255328819s Aug 23 08:57:06.751: INFO: Pod "pod-projected-secrets-9c5405fb-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25861341s Aug 23 08:57:08.754: INFO: Pod "pod-projected-secrets-9c5405fb-e51e-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 6.261914515s Aug 23 08:57:10.758: INFO: Pod "pod-projected-secrets-9c5405fb-e51e-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.265365346s STEP: Saw pod success Aug 23 08:57:10.758: INFO: Pod "pod-projected-secrets-9c5405fb-e51e-11ea-87d5-0242ac11000a" satisfied condition "success or failure" Aug 23 08:57:10.760: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-9c5405fb-e51e-11ea-87d5-0242ac11000a container projected-secret-volume-test: STEP: delete the pod Aug 23 08:57:11.042: INFO: Waiting for pod pod-projected-secrets-9c5405fb-e51e-11ea-87d5-0242ac11000a to disappear Aug 23 08:57:11.226: INFO: Pod pod-projected-secrets-9c5405fb-e51e-11ea-87d5-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 08:57:11.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nnzhc" for this suite. 
Aug 23 08:57:17.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 08:57:17.313: INFO: namespace: e2e-tests-projected-nnzhc, resource: bindings, ignored listing per whitelist Aug 23 08:57:17.338: INFO: namespace e2e-tests-projected-nnzhc deletion completed in 6.108650874s • [SLOW TEST:15.770 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 08:57:17.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 23 08:57:18.475: INFO: Waiting up to 5m0s for pod "pod-a5dc279b-e51e-11ea-87d5-0242ac11000a" in namespace "e2e-tests-emptydir-z5nmz" to be "success or failure" Aug 23 08:57:18.522: INFO: Pod "pod-a5dc279b-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 47.298481ms Aug 23 08:57:20.526: INFO: Pod "pod-a5dc279b-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.050847718s Aug 23 08:57:22.550: INFO: Pod "pod-a5dc279b-e51e-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 4.075017411s Aug 23 08:57:24.640: INFO: Pod "pod-a5dc279b-e51e-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.165346871s STEP: Saw pod success Aug 23 08:57:24.640: INFO: Pod "pod-a5dc279b-e51e-11ea-87d5-0242ac11000a" satisfied condition "success or failure" Aug 23 08:57:25.044: INFO: Trying to get logs from node hunter-worker2 pod pod-a5dc279b-e51e-11ea-87d5-0242ac11000a container test-container: STEP: delete the pod Aug 23 08:57:25.505: INFO: Waiting for pod pod-a5dc279b-e51e-11ea-87d5-0242ac11000a to disappear Aug 23 08:57:25.544: INFO: Pod pod-a5dc279b-e51e-11ea-87d5-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 08:57:25.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-z5nmz" for this suite. 
Aug 23 08:57:31.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 08:57:31.614: INFO: namespace: e2e-tests-emptydir-z5nmz, resource: bindings, ignored listing per whitelist Aug 23 08:57:31.685: INFO: namespace e2e-tests-emptydir-z5nmz deletion completed in 6.138562215s • [SLOW TEST:14.347 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 08:57:31.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 23 08:57:32.325: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae33723a-e51e-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-q4s78" to be "success or failure" Aug 23 08:57:32.437: INFO: Pod "downwardapi-volume-ae33723a-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", 
readiness=false. Elapsed: 111.714667ms Aug 23 08:57:34.446: INFO: Pod "downwardapi-volume-ae33723a-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121583146s Aug 23 08:57:36.503: INFO: Pod "downwardapi-volume-ae33723a-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178198403s Aug 23 08:57:38.545: INFO: Pod "downwardapi-volume-ae33723a-e51e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.220065457s Aug 23 08:57:40.548: INFO: Pod "downwardapi-volume-ae33723a-e51e-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.222878255s STEP: Saw pod success Aug 23 08:57:40.548: INFO: Pod "downwardapi-volume-ae33723a-e51e-11ea-87d5-0242ac11000a" satisfied condition "success or failure" Aug 23 08:57:40.630: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ae33723a-e51e-11ea-87d5-0242ac11000a container client-container: STEP: delete the pod Aug 23 08:57:40.797: INFO: Waiting for pod downwardapi-volume-ae33723a-e51e-11ea-87d5-0242ac11000a to disappear Aug 23 08:57:40.893: INFO: Pod downwardapi-volume-ae33723a-e51e-11ea-87d5-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 08:57:40.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-q4s78" for this suite. 
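[Editor's note] Every "Waiting up to 5m0s for pod … to be \"success or failure\"" sequence in this log follows the same pattern: poll the pod roughly every 2 seconds, log the phase and elapsed time, and stop once the pod reaches `Succeeded` or `Failed` or the 5-minute timeout expires. A simplified sketch of that polling loop, with a stubbed `get_phase` callable standing in for a real API request (names and stub phases are hypothetical, not the framework's actual implementation):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            now=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed',
    logging phase and elapsed time like the e2e framework does."""
    start = now()
    while True:
        phase = get_phase()
        elapsed = now() - start
        print(f'Phase="{phase}". Elapsed: {elapsed:.6f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.0f}s")
        sleep(interval)

# Stubbed phase sequence standing in for successive API responses:
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), interval=0.0)
```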
Aug 23 08:57:51.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 08:57:51.147: INFO: namespace: e2e-tests-projected-q4s78, resource: bindings, ignored listing per whitelist Aug 23 08:57:51.190: INFO: namespace e2e-tests-projected-q4s78 deletion completed in 10.209826808s • [SLOW TEST:19.505 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 08:57:51.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-ba29d765-e51e-11ea-87d5-0242ac11000a STEP: Creating secret with name s-test-opt-upd-ba29d7fb-e51e-11ea-87d5-0242ac11000a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-ba29d765-e51e-11ea-87d5-0242ac11000a STEP: Updating secret s-test-opt-upd-ba29d7fb-e51e-11ea-87d5-0242ac11000a STEP: Creating secret with name s-test-opt-create-ba29d838-e51e-11ea-87d5-0242ac11000a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 08:59:15.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-8fckx" for this suite. Aug 23 08:59:40.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 08:59:40.147: INFO: namespace: e2e-tests-secrets-8fckx, resource: bindings, ignored listing per whitelist Aug 23 08:59:40.147: INFO: namespace e2e-tests-secrets-8fckx deletion completed in 24.155268848s • [SLOW TEST:108.956 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 08:59:40.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 23 08:59:43.358: INFO: Pod name wrapped-volume-race-fc39d025-e51e-11ea-87d5-0242ac11000a: Found 0 pods out of 5 Aug 23 08:59:48.367: INFO: Pod 
name wrapped-volume-race-fc39d025-e51e-11ea-87d5-0242ac11000a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-fc39d025-e51e-11ea-87d5-0242ac11000a in namespace e2e-tests-emptydir-wrapper-vq7ln, will wait for the garbage collector to delete the pods Aug 23 09:02:24.451: INFO: Deleting ReplicationController wrapped-volume-race-fc39d025-e51e-11ea-87d5-0242ac11000a took: 8.048462ms Aug 23 09:02:24.651: INFO: Terminating ReplicationController wrapped-volume-race-fc39d025-e51e-11ea-87d5-0242ac11000a pods took: 200.274397ms STEP: Creating RC which spawns configmap-volume pods Aug 23 09:03:08.619: INFO: Pod name wrapped-volume-race-76a56d9b-e51f-11ea-87d5-0242ac11000a: Found 0 pods out of 5 Aug 23 09:03:13.627: INFO: Pod name wrapped-volume-race-76a56d9b-e51f-11ea-87d5-0242ac11000a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-76a56d9b-e51f-11ea-87d5-0242ac11000a in namespace e2e-tests-emptydir-wrapper-vq7ln, will wait for the garbage collector to delete the pods Aug 23 09:05:57.726: INFO: Deleting ReplicationController wrapped-volume-race-76a56d9b-e51f-11ea-87d5-0242ac11000a took: 5.259147ms Aug 23 09:05:57.927: INFO: Terminating ReplicationController wrapped-volume-race-76a56d9b-e51f-11ea-87d5-0242ac11000a pods took: 200.287936ms STEP: Creating RC which spawns configmap-volume pods Aug 23 09:06:38.360: INFO: Pod name wrapped-volume-race-f3ae9ba5-e51f-11ea-87d5-0242ac11000a: Found 0 pods out of 5 Aug 23 09:06:43.366: INFO: Pod name wrapped-volume-race-f3ae9ba5-e51f-11ea-87d5-0242ac11000a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f3ae9ba5-e51f-11ea-87d5-0242ac11000a in namespace e2e-tests-emptydir-wrapper-vq7ln, will wait for the garbage collector to delete the pods Aug 23 09:08:39.725: INFO: Deleting ReplicationController 
wrapped-volume-race-f3ae9ba5-e51f-11ea-87d5-0242ac11000a took: 5.429705ms Aug 23 09:08:39.825: INFO: Terminating ReplicationController wrapped-volume-race-f3ae9ba5-e51f-11ea-87d5-0242ac11000a pods took: 100.213912ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:09:30.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-vq7ln" for this suite. Aug 23 09:09:40.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:09:40.325: INFO: namespace: e2e-tests-emptydir-wrapper-vq7ln, resource: bindings, ignored listing per whitelist Aug 23 09:09:40.375: INFO: namespace e2e-tests-emptydir-wrapper-vq7ln deletion completed in 10.084080868s • [SLOW TEST:600.228 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:09:40.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 23 09:09:40.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-m7zh6' Aug 23 09:09:46.005: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 23 09:09:46.006: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Aug 23 09:09:48.021: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-762p2] Aug 23 09:09:48.021: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-762p2" in namespace "e2e-tests-kubectl-m7zh6" to be "running and ready" Aug 23 09:09:48.024: INFO: Pod "e2e-test-nginx-rc-762p2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.552703ms Aug 23 09:09:50.219: INFO: Pod "e2e-test-nginx-rc-762p2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197215164s Aug 23 09:09:52.222: INFO: Pod "e2e-test-nginx-rc-762p2": Phase="Running", Reason="", readiness=true. Elapsed: 4.200672626s Aug 23 09:09:52.222: INFO: Pod "e2e-test-nginx-rc-762p2" satisfied condition "running and ready" Aug 23 09:09:52.222: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-762p2] Aug 23 09:09:52.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-m7zh6' Aug 23 09:09:52.358: INFO: stderr: "" Aug 23 09:09:52.358: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Aug 23 09:09:52.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-m7zh6' Aug 23 09:09:52.474: INFO: stderr: "" Aug 23 09:09:52.474: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:09:52.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-m7zh6" for this suite. Aug 23 09:10:00.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:10:01.036: INFO: namespace: e2e-tests-kubectl-m7zh6, resource: bindings, ignored listing per whitelist Aug 23 09:10:01.803: INFO: namespace e2e-tests-kubectl-m7zh6 deletion completed in 9.326720556s • [SLOW TEST:21.429 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:10:01.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Aug 23 09:10:01.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-jbdwc run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Aug 23 09:10:06.915: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0823 09:10:06.865416 269 log.go:172] (0xc0005b80b0) (0xc00078a500) Create stream\nI0823 09:10:06.865469 269 log.go:172] (0xc0005b80b0) (0xc00078a500) Stream added, broadcasting: 1\nI0823 09:10:06.867591 269 log.go:172] (0xc0005b80b0) Reply frame received for 1\nI0823 09:10:06.867637 269 log.go:172] (0xc0005b80b0) (0xc0007c4aa0) Create stream\nI0823 09:10:06.867650 269 log.go:172] (0xc0005b80b0) (0xc0007c4aa0) Stream added, broadcasting: 3\nI0823 09:10:06.868519 269 log.go:172] (0xc0005b80b0) Reply frame received for 3\nI0823 09:10:06.868547 269 log.go:172] (0xc0005b80b0) (0xc00078a5a0) Create stream\nI0823 09:10:06.868554 269 log.go:172] (0xc0005b80b0) (0xc00078a5a0) Stream added, broadcasting: 5\nI0823 09:10:06.869597 269 log.go:172] (0xc0005b80b0) Reply frame received for 5\nI0823 09:10:06.869638 269 log.go:172] (0xc0005b80b0) (0xc000830000) Create stream\nI0823 09:10:06.869656 269 log.go:172] (0xc0005b80b0) (0xc000830000) Stream added, broadcasting: 7\nI0823 09:10:06.870480 269 log.go:172] (0xc0005b80b0) Reply frame received for 7\nI0823 09:10:06.870606 269 log.go:172] (0xc0007c4aa0) (3) Writing data frame\nI0823 09:10:06.870694 269 log.go:172] (0xc0007c4aa0) (3) Writing data frame\nI0823 09:10:06.871734 269 log.go:172] (0xc0005b80b0) Data frame received for 5\nI0823 09:10:06.871757 269 log.go:172] (0xc00078a5a0) (5) Data frame handling\nI0823 09:10:06.871772 269 log.go:172] (0xc00078a5a0) (5) Data frame sent\nI0823 09:10:06.872420 269 log.go:172] (0xc0005b80b0) Data frame received for 5\nI0823 09:10:06.872438 269 log.go:172] (0xc00078a5a0) (5) Data frame handling\nI0823 09:10:06.872453 269 log.go:172] (0xc00078a5a0) (5) Data frame sent\nI0823 09:10:06.890601 269 log.go:172] (0xc0005b80b0) Data frame received for 5\nI0823 09:10:06.890636 269 log.go:172] (0xc0005b80b0) Data frame received for 7\nI0823 09:10:06.890664 269 log.go:172] 
(0xc000830000) (7) Data frame handling\nI0823 09:10:06.890695 269 log.go:172] (0xc00078a5a0) (5) Data frame handling\nI0823 09:10:06.890921 269 log.go:172] (0xc0005b80b0) Data frame received for 1\nI0823 09:10:06.890943 269 log.go:172] (0xc00078a500) (1) Data frame handling\nI0823 09:10:06.890967 269 log.go:172] (0xc00078a500) (1) Data frame sent\nI0823 09:10:06.891051 269 log.go:172] (0xc0005b80b0) (0xc0007c4aa0) Stream removed, broadcasting: 3\nI0823 09:10:06.891112 269 log.go:172] (0xc0005b80b0) (0xc00078a500) Stream removed, broadcasting: 1\nI0823 09:10:06.891175 269 log.go:172] (0xc0005b80b0) Go away received\nI0823 09:10:06.891203 269 log.go:172] (0xc0005b80b0) (0xc00078a500) Stream removed, broadcasting: 1\nI0823 09:10:06.891239 269 log.go:172] (0xc0005b80b0) (0xc0007c4aa0) Stream removed, broadcasting: 3\nI0823 09:10:06.891266 269 log.go:172] (0xc0005b80b0) (0xc00078a5a0) Stream removed, broadcasting: 5\nI0823 09:10:06.891293 269 log.go:172] (0xc0005b80b0) (0xc000830000) Stream removed, broadcasting: 7\n" Aug 23 09:10:06.915: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:10:09.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jbdwc" for this suite. 
Aug 23 09:10:27.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:10:27.580: INFO: namespace: e2e-tests-kubectl-jbdwc, resource: bindings, ignored listing per whitelist Aug 23 09:10:27.587: INFO: namespace e2e-tests-kubectl-jbdwc deletion completed in 18.44404904s • [SLOW TEST:25.783 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:10:27.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-7c9c00d3-e520-11ea-87d5-0242ac11000a STEP: Creating secret with name s-test-opt-upd-7c9c0151-e520-11ea-87d5-0242ac11000a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-7c9c00d3-e520-11ea-87d5-0242ac11000a STEP: Updating secret s-test-opt-upd-7c9c0151-e520-11ea-87d5-0242ac11000a STEP: Creating secret with name 
s-test-opt-create-7c9c018e-e520-11ea-87d5-0242ac11000a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:11:46.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2ww2f" for this suite. Aug 23 09:12:10.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:12:10.822: INFO: namespace: e2e-tests-projected-2ww2f, resource: bindings, ignored listing per whitelist Aug 23 09:12:10.858: INFO: namespace e2e-tests-projected-2ww2f deletion completed in 24.172984642s • [SLOW TEST:103.271 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:12:10.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
STEP: creating all guestbook components Aug 23 09:12:10.956: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Aug 23 09:12:10.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gz6n4' Aug 23 09:12:11.324: INFO: stderr: "" Aug 23 09:12:11.324: INFO: stdout: "service/redis-slave created\n" Aug 23 09:12:11.324: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Aug 23 09:12:11.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gz6n4' Aug 23 09:12:11.623: INFO: stderr: "" Aug 23 09:12:11.623: INFO: stdout: "service/redis-master created\n" Aug 23 09:12:11.623: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Aug 23 09:12:11.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gz6n4' Aug 23 09:12:11.964: INFO: stderr: "" Aug 23 09:12:11.964: INFO: stdout: "service/frontend created\n" Aug 23 09:12:11.964: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Aug 23 09:12:11.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gz6n4' Aug 23 09:12:12.225: INFO: stderr: "" Aug 23 09:12:12.225: INFO: stdout: "deployment.extensions/frontend created\n" Aug 23 09:12:12.226: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Aug 23 09:12:12.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gz6n4' Aug 23 09:12:12.582: INFO: stderr: "" Aug 23 09:12:12.582: INFO: stdout: "deployment.extensions/redis-master created\n" Aug 23 09:12:12.582: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: 
gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Aug 23 09:12:12.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gz6n4' Aug 23 09:12:12.924: INFO: stderr: "" Aug 23 09:12:12.924: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Aug 23 09:12:12.924: INFO: Waiting for all frontend pods to be Running. Aug 23 09:12:22.974: INFO: Waiting for frontend to serve content. Aug 23 09:12:23.117: INFO: Trying to add a new entry to the guestbook. Aug 23 09:12:23.134: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Aug 23 09:12:23.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gz6n4' Aug 23 09:12:23.498: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 23 09:12:23.498: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Aug 23 09:12:23.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gz6n4' Aug 23 09:12:23.962: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 23 09:12:23.962: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Aug 23 09:12:23.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gz6n4' Aug 23 09:12:24.278: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 23 09:12:24.278: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 23 09:12:24.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gz6n4' Aug 23 09:12:24.397: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 23 09:12:24.397: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Aug 23 09:12:24.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gz6n4' Aug 23 09:12:24.520: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 23 09:12:24.520: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Aug 23 09:12:24.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gz6n4' Aug 23 09:12:24.810: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 23 09:12:24.810: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:12:24.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gz6n4" for this suite. Aug 23 09:13:09.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:13:09.463: INFO: namespace: e2e-tests-kubectl-gz6n4, resource: bindings, ignored listing per whitelist Aug 23 09:13:09.714: INFO: namespace e2e-tests-kubectl-gz6n4 deletion completed in 44.695498121s • [SLOW TEST:58.856 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:13:09.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-f8kxm [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-f8kxm STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-f8kxm Aug 23 09:13:11.460: INFO: Found 0 stateful pods, waiting for 1 Aug 23 09:13:21.464: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 23 09:13:21.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f8kxm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 23 09:13:21.745: INFO: stderr: "I0823 09:13:21.579325 576 log.go:172] (0xc000138580) (0xc0000ef360) Create stream\nI0823 09:13:21.579378 576 log.go:172] (0xc000138580) (0xc0000ef360) Stream added, broadcasting: 1\nI0823 09:13:21.581815 576 log.go:172] (0xc000138580) Reply frame received for 1\nI0823 09:13:21.581872 576 log.go:172] (0xc000138580) (0xc0002c0000) Create stream\nI0823 09:13:21.581887 576 log.go:172] (0xc000138580) (0xc0002c0000) Stream added, broadcasting: 3\nI0823 09:13:21.582624 576 log.go:172] (0xc000138580) Reply frame received for 3\nI0823 09:13:21.582659 576 log.go:172] (0xc000138580) (0xc00048e000) Create stream\nI0823 09:13:21.582670 576 log.go:172] (0xc000138580) (0xc00048e000) Stream added, broadcasting: 5\nI0823 09:13:21.583419 
576 log.go:172] (0xc000138580) Reply frame received for 5\nI0823 09:13:21.732607 576 log.go:172] (0xc000138580) Data frame received for 3\nI0823 09:13:21.732641 576 log.go:172] (0xc0002c0000) (3) Data frame handling\nI0823 09:13:21.732659 576 log.go:172] (0xc0002c0000) (3) Data frame sent\nI0823 09:13:21.732675 576 log.go:172] (0xc000138580) Data frame received for 3\nI0823 09:13:21.732690 576 log.go:172] (0xc0002c0000) (3) Data frame handling\nI0823 09:13:21.732796 576 log.go:172] (0xc000138580) Data frame received for 5\nI0823 09:13:21.732810 576 log.go:172] (0xc00048e000) (5) Data frame handling\nI0823 09:13:21.734546 576 log.go:172] (0xc000138580) Data frame received for 1\nI0823 09:13:21.734571 576 log.go:172] (0xc0000ef360) (1) Data frame handling\nI0823 09:13:21.734579 576 log.go:172] (0xc0000ef360) (1) Data frame sent\nI0823 09:13:21.734590 576 log.go:172] (0xc000138580) (0xc0000ef360) Stream removed, broadcasting: 1\nI0823 09:13:21.734607 576 log.go:172] (0xc000138580) Go away received\nI0823 09:13:21.734862 576 log.go:172] (0xc000138580) (0xc0000ef360) Stream removed, broadcasting: 1\nI0823 09:13:21.734884 576 log.go:172] (0xc000138580) (0xc0002c0000) Stream removed, broadcasting: 3\nI0823 09:13:21.734895 576 log.go:172] (0xc000138580) (0xc00048e000) Stream removed, broadcasting: 5\n" Aug 23 09:13:21.745: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 23 09:13:21.745: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 23 09:13:21.755: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 23 09:13:31.760: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 23 09:13:31.761: INFO: Waiting for statefulset status.replicas updated to 0 Aug 23 09:13:31.792: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999399s Aug 23 09:13:32.911: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 8.987607134s Aug 23 09:13:33.915: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.868910055s Aug 23 09:13:34.919: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.86470159s Aug 23 09:13:35.928: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.860197346s Aug 23 09:13:37.072: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.851468779s Aug 23 09:13:38.077: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.707419228s Aug 23 09:13:39.082: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.702337226s Aug 23 09:13:40.087: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.697619602s Aug 23 09:13:41.091: INFO: Verifying statefulset ss doesn't scale past 1 for another 692.910476ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-f8kxm Aug 23 09:13:42.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f8kxm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 23 09:13:42.317: INFO: stderr: "I0823 09:13:42.239219 599 log.go:172] (0xc0006fc370) (0xc00071a640) Create stream\nI0823 09:13:42.239296 599 log.go:172] (0xc0006fc370) (0xc00071a640) Stream added, broadcasting: 1\nI0823 09:13:42.242377 599 log.go:172] (0xc0006fc370) Reply frame received for 1\nI0823 09:13:42.242443 599 log.go:172] (0xc0006fc370) (0xc000622d20) Create stream\nI0823 09:13:42.242463 599 log.go:172] (0xc0006fc370) (0xc000622d20) Stream added, broadcasting: 3\nI0823 09:13:42.243555 599 log.go:172] (0xc0006fc370) Reply frame received for 3\nI0823 09:13:42.243607 599 log.go:172] (0xc0006fc370) (0xc000620000) Create stream\nI0823 09:13:42.243628 599 log.go:172] (0xc0006fc370) (0xc000620000) Stream added, broadcasting: 5\nI0823 09:13:42.244600 599 log.go:172] (0xc0006fc370) Reply 
frame received for 5\nI0823 09:13:42.306044 599 log.go:172] (0xc0006fc370) Data frame received for 5\nI0823 09:13:42.306089 599 log.go:172] (0xc000620000) (5) Data frame handling\nI0823 09:13:42.306120 599 log.go:172] (0xc0006fc370) Data frame received for 3\nI0823 09:13:42.306136 599 log.go:172] (0xc000622d20) (3) Data frame handling\nI0823 09:13:42.306156 599 log.go:172] (0xc000622d20) (3) Data frame sent\nI0823 09:13:42.306178 599 log.go:172] (0xc0006fc370) Data frame received for 3\nI0823 09:13:42.306191 599 log.go:172] (0xc000622d20) (3) Data frame handling\nI0823 09:13:42.307286 599 log.go:172] (0xc0006fc370) Data frame received for 1\nI0823 09:13:42.307384 599 log.go:172] (0xc00071a640) (1) Data frame handling\nI0823 09:13:42.307432 599 log.go:172] (0xc00071a640) (1) Data frame sent\nI0823 09:13:42.307460 599 log.go:172] (0xc0006fc370) (0xc00071a640) Stream removed, broadcasting: 1\nI0823 09:13:42.307505 599 log.go:172] (0xc0006fc370) Go away received\nI0823 09:13:42.307846 599 log.go:172] (0xc0006fc370) (0xc00071a640) Stream removed, broadcasting: 1\nI0823 09:13:42.307883 599 log.go:172] (0xc0006fc370) (0xc000622d20) Stream removed, broadcasting: 3\nI0823 09:13:42.307908 599 log.go:172] (0xc0006fc370) (0xc000620000) Stream removed, broadcasting: 5\n" Aug 23 09:13:42.317: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 23 09:13:42.317: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 23 09:13:42.320: INFO: Found 1 stateful pods, waiting for 3 Aug 23 09:13:52.325: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 23 09:13:52.325: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 23 09:13:52.325: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false Aug 23 09:14:02.325: INFO: Waiting for pod ss-0 to enter Running - 
Ready=true, currently Running - Ready=true Aug 23 09:14:02.325: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 23 09:14:02.325: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 23 09:14:02.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f8kxm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 23 09:14:02.532: INFO: stderr: "I0823 09:14:02.457398 622 log.go:172] (0xc00016c580) (0xc0005934a0) Create stream\nI0823 09:14:02.457448 622 log.go:172] (0xc00016c580) (0xc0005934a0) Stream added, broadcasting: 1\nI0823 09:14:02.466119 622 log.go:172] (0xc00016c580) Reply frame received for 1\nI0823 09:14:02.466154 622 log.go:172] (0xc00016c580) (0xc0001da000) Create stream\nI0823 09:14:02.466169 622 log.go:172] (0xc00016c580) (0xc0001da000) Stream added, broadcasting: 3\nI0823 09:14:02.467355 622 log.go:172] (0xc00016c580) Reply frame received for 3\nI0823 09:14:02.467387 622 log.go:172] (0xc00016c580) (0xc00001c000) Create stream\nI0823 09:14:02.467395 622 log.go:172] (0xc00016c580) (0xc00001c000) Stream added, broadcasting: 5\nI0823 09:14:02.468021 622 log.go:172] (0xc00016c580) Reply frame received for 5\nI0823 09:14:02.526378 622 log.go:172] (0xc00016c580) Data frame received for 3\nI0823 09:14:02.526423 622 log.go:172] (0xc0001da000) (3) Data frame handling\nI0823 09:14:02.526455 622 log.go:172] (0xc0001da000) (3) Data frame sent\nI0823 09:14:02.526471 622 log.go:172] (0xc00016c580) Data frame received for 3\nI0823 09:14:02.526486 622 log.go:172] (0xc0001da000) (3) Data frame handling\nI0823 09:14:02.526506 622 log.go:172] (0xc00016c580) Data frame received for 5\nI0823 09:14:02.526517 622 log.go:172] (0xc00001c000) (5) Data frame handling\nI0823 09:14:02.527628 622 log.go:172] 
(0xc00016c580) Data frame received for 1\nI0823 09:14:02.527650 622 log.go:172] (0xc0005934a0) (1) Data frame handling\nI0823 09:14:02.527674 622 log.go:172] (0xc0005934a0) (1) Data frame sent\nI0823 09:14:02.527721 622 log.go:172] (0xc00016c580) (0xc0005934a0) Stream removed, broadcasting: 1\nI0823 09:14:02.527775 622 log.go:172] (0xc00016c580) Go away received\nI0823 09:14:02.527875 622 log.go:172] (0xc00016c580) (0xc0005934a0) Stream removed, broadcasting: 1\nI0823 09:14:02.527887 622 log.go:172] (0xc00016c580) (0xc0001da000) Stream removed, broadcasting: 3\nI0823 09:14:02.527897 622 log.go:172] (0xc00016c580) (0xc00001c000) Stream removed, broadcasting: 5\n" Aug 23 09:14:02.532: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 23 09:14:02.532: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 23 09:14:02.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f8kxm ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 23 09:14:02.793: INFO: stderr: "I0823 09:14:02.667329 644 log.go:172] (0xc0001380b0) (0xc000610460) Create stream\nI0823 09:14:02.667372 644 log.go:172] (0xc0001380b0) (0xc000610460) Stream added, broadcasting: 1\nI0823 09:14:02.672494 644 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0823 09:14:02.672543 644 log.go:172] (0xc0001380b0) (0xc0000d2000) Create stream\nI0823 09:14:02.672563 644 log.go:172] (0xc0001380b0) (0xc0000d2000) Stream added, broadcasting: 3\nI0823 09:14:02.673392 644 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0823 09:14:02.673424 644 log.go:172] (0xc0001380b0) (0xc000610500) Create stream\nI0823 09:14:02.673444 644 log.go:172] (0xc0001380b0) (0xc000610500) Stream added, broadcasting: 5\nI0823 09:14:02.674133 644 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0823 09:14:02.782382 644 log.go:172] 
(0xc0001380b0) Data frame received for 5\nI0823 09:14:02.782425 644 log.go:172] (0xc000610500) (5) Data frame handling\nI0823 09:14:02.782452 644 log.go:172] (0xc0001380b0) Data frame received for 3\nI0823 09:14:02.782465 644 log.go:172] (0xc0000d2000) (3) Data frame handling\nI0823 09:14:02.782478 644 log.go:172] (0xc0000d2000) (3) Data frame sent\nI0823 09:14:02.782766 644 log.go:172] (0xc0001380b0) Data frame received for 3\nI0823 09:14:02.782806 644 log.go:172] (0xc0000d2000) (3) Data frame handling\nI0823 09:14:02.783891 644 log.go:172] (0xc0001380b0) Data frame received for 1\nI0823 09:14:02.783911 644 log.go:172] (0xc000610460) (1) Data frame handling\nI0823 09:14:02.783923 644 log.go:172] (0xc000610460) (1) Data frame sent\nI0823 09:14:02.783935 644 log.go:172] (0xc0001380b0) (0xc000610460) Stream removed, broadcasting: 1\nI0823 09:14:02.783953 644 log.go:172] (0xc0001380b0) Go away received\nI0823 09:14:02.784129 644 log.go:172] (0xc0001380b0) (0xc000610460) Stream removed, broadcasting: 1\nI0823 09:14:02.784143 644 log.go:172] (0xc0001380b0) (0xc0000d2000) Stream removed, broadcasting: 3\nI0823 09:14:02.784151 644 log.go:172] (0xc0001380b0) (0xc000610500) Stream removed, broadcasting: 5\n" Aug 23 09:14:02.793: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 23 09:14:02.793: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 23 09:14:02.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f8kxm ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 23 09:14:03.031: INFO: stderr: "I0823 09:14:02.913892 665 log.go:172] (0xc00075a4d0) (0xc0005f74a0) Create stream\nI0823 09:14:02.913927 665 log.go:172] (0xc00075a4d0) (0xc0005f74a0) Stream added, broadcasting: 1\nI0823 09:14:02.915192 665 log.go:172] (0xc00075a4d0) Reply frame received for 1\nI0823 09:14:02.915219 
665 log.go:172] (0xc00075a4d0) (0xc0006fa000) Create stream\nI0823 09:14:02.915230 665 log.go:172] (0xc00075a4d0) (0xc0006fa000) Stream added, broadcasting: 3\nI0823 09:14:02.915815 665 log.go:172] (0xc00075a4d0) Reply frame received for 3\nI0823 09:14:02.915844 665 log.go:172] (0xc00075a4d0) (0xc0005f7540) Create stream\nI0823 09:14:02.915855 665 log.go:172] (0xc00075a4d0) (0xc0005f7540) Stream added, broadcasting: 5\nI0823 09:14:02.916557 665 log.go:172] (0xc00075a4d0) Reply frame received for 5\nI0823 09:14:03.022283 665 log.go:172] (0xc00075a4d0) Data frame received for 5\nI0823 09:14:03.022302 665 log.go:172] (0xc0005f7540) (5) Data frame handling\nI0823 09:14:03.022314 665 log.go:172] (0xc00075a4d0) Data frame received for 3\nI0823 09:14:03.022318 665 log.go:172] (0xc0006fa000) (3) Data frame handling\nI0823 09:14:03.022325 665 log.go:172] (0xc0006fa000) (3) Data frame sent\nI0823 09:14:03.022424 665 log.go:172] (0xc00075a4d0) Data frame received for 3\nI0823 09:14:03.022453 665 log.go:172] (0xc0006fa000) (3) Data frame handling\nI0823 09:14:03.024310 665 log.go:172] (0xc00075a4d0) Data frame received for 1\nI0823 09:14:03.024321 665 log.go:172] (0xc0005f74a0) (1) Data frame handling\nI0823 09:14:03.024330 665 log.go:172] (0xc0005f74a0) (1) Data frame sent\nI0823 09:14:03.024540 665 log.go:172] (0xc00075a4d0) (0xc0005f74a0) Stream removed, broadcasting: 1\nI0823 09:14:03.024595 665 log.go:172] (0xc00075a4d0) Go away received\nI0823 09:14:03.024877 665 log.go:172] (0xc00075a4d0) (0xc0005f74a0) Stream removed, broadcasting: 1\nI0823 09:14:03.024907 665 log.go:172] (0xc00075a4d0) (0xc0006fa000) Stream removed, broadcasting: 3\nI0823 09:14:03.024927 665 log.go:172] (0xc00075a4d0) (0xc0005f7540) Stream removed, broadcasting: 5\n" Aug 23 09:14:03.031: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 23 09:14:03.031: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> 
'/tmp/index.html' Aug 23 09:14:03.031: INFO: Waiting for statefulset status.replicas updated to 0 Aug 23 09:14:03.033: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Aug 23 09:14:13.041: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 23 09:14:13.041: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 23 09:14:13.041: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 23 09:14:13.141: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999691s Aug 23 09:14:14.146: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.905195478s Aug 23 09:14:15.168: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.900761706s Aug 23 09:14:16.172: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.878734182s Aug 23 09:14:17.176: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.874677309s Aug 23 09:14:18.179: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.870950413s Aug 23 09:14:19.184: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.867078858s Aug 23 09:14:20.188: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.863020119s Aug 23 09:14:21.192: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.858223447s Aug 23 09:14:22.197: INFO: Verifying statefulset ss doesn't scale past 3 for another 854.056789ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-f8kxm Aug 23 09:14:23.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f8kxm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 23 09:14:23.410: INFO: stderr: "I0823 09:14:23.322707 688 log.go:172] (0xc0006e62c0) (0xc0007345a0) Create stream\nI0823 09:14:23.322759 688 
log.go:172] (0xc0006e62c0) (0xc0007345a0) Stream added, broadcasting: 1\nI0823 09:14:23.324765 688 log.go:172] (0xc0006e62c0) Reply frame received for 1\nI0823 09:14:23.324824 688 log.go:172] (0xc0006e62c0) (0xc000354dc0) Create stream\nI0823 09:14:23.324835 688 log.go:172] (0xc0006e62c0) (0xc000354dc0) Stream added, broadcasting: 3\nI0823 09:14:23.325725 688 log.go:172] (0xc0006e62c0) Reply frame received for 3\nI0823 09:14:23.325742 688 log.go:172] (0xc0006e62c0) (0xc000734640) Create stream\nI0823 09:14:23.325749 688 log.go:172] (0xc0006e62c0) (0xc000734640) Stream added, broadcasting: 5\nI0823 09:14:23.326774 688 log.go:172] (0xc0006e62c0) Reply frame received for 5\nI0823 09:14:23.404896 688 log.go:172] (0xc0006e62c0) Data frame received for 3\nI0823 09:14:23.404919 688 log.go:172] (0xc000354dc0) (3) Data frame handling\nI0823 09:14:23.404929 688 log.go:172] (0xc000354dc0) (3) Data frame sent\nI0823 09:14:23.405200 688 log.go:172] (0xc0006e62c0) Data frame received for 5\nI0823 09:14:23.405225 688 log.go:172] (0xc0006e62c0) Data frame received for 3\nI0823 09:14:23.405237 688 log.go:172] (0xc000354dc0) (3) Data frame handling\nI0823 09:14:23.405251 688 log.go:172] (0xc000734640) (5) Data frame handling\nI0823 09:14:23.405970 688 log.go:172] (0xc0006e62c0) Data frame received for 1\nI0823 09:14:23.405996 688 log.go:172] (0xc0007345a0) (1) Data frame handling\nI0823 09:14:23.406041 688 log.go:172] (0xc0007345a0) (1) Data frame sent\nI0823 09:14:23.406148 688 log.go:172] (0xc0006e62c0) (0xc0007345a0) Stream removed, broadcasting: 1\nI0823 09:14:23.406196 688 log.go:172] (0xc0006e62c0) Go away received\nI0823 09:14:23.406331 688 log.go:172] (0xc0006e62c0) (0xc0007345a0) Stream removed, broadcasting: 1\nI0823 09:14:23.406346 688 log.go:172] (0xc0006e62c0) (0xc000354dc0) Stream removed, broadcasting: 3\nI0823 09:14:23.406351 688 log.go:172] (0xc0006e62c0) (0xc000734640) Stream removed, broadcasting: 5\n" Aug 23 09:14:23.410: INFO: stdout: "'/tmp/index.html' -> 
'/usr/share/nginx/html/index.html'\n" Aug 23 09:14:23.410: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 23 09:14:23.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f8kxm ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 23 09:14:23.615: INFO: stderr: "I0823 09:14:23.545119 711 log.go:172] (0xc0001386e0) (0xc0005c52c0) Create stream\nI0823 09:14:23.545156 711 log.go:172] (0xc0001386e0) (0xc0005c52c0) Stream added, broadcasting: 1\nI0823 09:14:23.546641 711 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0823 09:14:23.546677 711 log.go:172] (0xc0001386e0) (0xc0005c5360) Create stream\nI0823 09:14:23.546692 711 log.go:172] (0xc0001386e0) (0xc0005c5360) Stream added, broadcasting: 3\nI0823 09:14:23.547218 711 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0823 09:14:23.547232 711 log.go:172] (0xc0001386e0) (0xc0005c5400) Create stream\nI0823 09:14:23.547237 711 log.go:172] (0xc0001386e0) (0xc0005c5400) Stream added, broadcasting: 5\nI0823 09:14:23.547703 711 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0823 09:14:23.607565 711 log.go:172] (0xc0001386e0) Data frame received for 3\nI0823 09:14:23.607601 711 log.go:172] (0xc0005c5360) (3) Data frame handling\nI0823 09:14:23.607625 711 log.go:172] (0xc0005c5360) (3) Data frame sent\nI0823 09:14:23.607638 711 log.go:172] (0xc0001386e0) Data frame received for 3\nI0823 09:14:23.607649 711 log.go:172] (0xc0005c5360) (3) Data frame handling\nI0823 09:14:23.608946 711 log.go:172] (0xc0001386e0) Data frame received for 5\nI0823 09:14:23.608964 711 log.go:172] (0xc0005c5400) (5) Data frame handling\nI0823 09:14:23.609542 711 log.go:172] (0xc0001386e0) Data frame received for 1\nI0823 09:14:23.609621 711 log.go:172] (0xc0005c52c0) (1) Data frame handling\nI0823 09:14:23.609697 711 log.go:172] (0xc0005c52c0) (1) Data 
frame sent\nI0823 09:14:23.609760 711 log.go:172] (0xc0001386e0) (0xc0005c52c0) Stream removed, broadcasting: 1\nI0823 09:14:23.609786 711 log.go:172] (0xc0001386e0) Go away received\nI0823 09:14:23.610020 711 log.go:172] (0xc0001386e0) (0xc0005c52c0) Stream removed, broadcasting: 1\nI0823 09:14:23.610085 711 log.go:172] (0xc0001386e0) (0xc0005c5360) Stream removed, broadcasting: 3\nI0823 09:14:23.610133 711 log.go:172] (0xc0001386e0) (0xc0005c5400) Stream removed, broadcasting: 5\n" Aug 23 09:14:23.615: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 23 09:14:23.615: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 23 09:14:23.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-f8kxm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 23 09:14:23.786: INFO: stderr: "I0823 09:14:23.734481 733 log.go:172] (0xc00080e2c0) (0xc00066d360) Create stream\nI0823 09:14:23.734530 733 log.go:172] (0xc00080e2c0) (0xc00066d360) Stream added, broadcasting: 1\nI0823 09:14:23.736200 733 log.go:172] (0xc00080e2c0) Reply frame received for 1\nI0823 09:14:23.736227 733 log.go:172] (0xc00080e2c0) (0xc00037c000) Create stream\nI0823 09:14:23.736237 733 log.go:172] (0xc00080e2c0) (0xc00037c000) Stream added, broadcasting: 3\nI0823 09:14:23.737015 733 log.go:172] (0xc00080e2c0) Reply frame received for 3\nI0823 09:14:23.737050 733 log.go:172] (0xc00080e2c0) (0xc00066d400) Create stream\nI0823 09:14:23.737064 733 log.go:172] (0xc00080e2c0) (0xc00066d400) Stream added, broadcasting: 5\nI0823 09:14:23.737650 733 log.go:172] (0xc00080e2c0) Reply frame received for 5\nI0823 09:14:23.781797 733 log.go:172] (0xc00080e2c0) Data frame received for 5\nI0823 09:14:23.781812 733 log.go:172] (0xc00066d400) (5) Data frame handling\nI0823 09:14:23.781824 733 log.go:172] (0xc00080e2c0) Data frame 
received for 3\nI0823 09:14:23.781828 733 log.go:172] (0xc00037c000) (3) Data frame handling\nI0823 09:14:23.781834 733 log.go:172] (0xc00037c000) (3) Data frame sent\nI0823 09:14:23.781868 733 log.go:172] (0xc00080e2c0) Data frame received for 3\nI0823 09:14:23.781877 733 log.go:172] (0xc00037c000) (3) Data frame handling\nI0823 09:14:23.782933 733 log.go:172] (0xc00080e2c0) Data frame received for 1\nI0823 09:14:23.782952 733 log.go:172] (0xc00066d360) (1) Data frame handling\nI0823 09:14:23.782959 733 log.go:172] (0xc00066d360) (1) Data frame sent\nI0823 09:14:23.782975 733 log.go:172] (0xc00080e2c0) (0xc00066d360) Stream removed, broadcasting: 1\nI0823 09:14:23.782988 733 log.go:172] (0xc00080e2c0) Go away received\nI0823 09:14:23.783133 733 log.go:172] (0xc00080e2c0) (0xc00066d360) Stream removed, broadcasting: 1\nI0823 09:14:23.783144 733 log.go:172] (0xc00080e2c0) (0xc00037c000) Stream removed, broadcasting: 3\nI0823 09:14:23.783149 733 log.go:172] (0xc00080e2c0) (0xc00066d400) Stream removed, broadcasting: 5\n" Aug 23 09:14:23.786: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 23 09:14:23.786: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 23 09:14:23.786: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 23 09:14:53.798: INFO: Deleting all statefulset in ns e2e-tests-statefulset-f8kxm Aug 23 09:14:53.801: INFO: Scaling statefulset ss to 0 Aug 23 09:14:53.807: INFO: Waiting for statefulset status.replicas updated to 0 Aug 23 09:14:53.809: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 
09:14:53.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-f8kxm" for this suite. Aug 23 09:15:01.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:15:01.928: INFO: namespace: e2e-tests-statefulset-f8kxm, resource: bindings, ignored listing per whitelist Aug 23 09:15:01.955: INFO: namespace e2e-tests-statefulset-f8kxm deletion completed in 8.110026715s • [SLOW TEST:112.241 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:15:01.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test 
downward API volume plugin Aug 23 09:15:02.075: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fef7ca5-e521-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-q2dhh" to be "success or failure" Aug 23 09:15:02.098: INFO: Pod "downwardapi-volume-1fef7ca5-e521-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.362621ms Aug 23 09:15:04.330: INFO: Pod "downwardapi-volume-1fef7ca5-e521-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254590394s Aug 23 09:15:06.334: INFO: Pod "downwardapi-volume-1fef7ca5-e521-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 4.258434724s Aug 23 09:15:08.337: INFO: Pod "downwardapi-volume-1fef7ca5-e521-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.261760681s STEP: Saw pod success Aug 23 09:15:08.337: INFO: Pod "downwardapi-volume-1fef7ca5-e521-11ea-87d5-0242ac11000a" satisfied condition "success or failure" Aug 23 09:15:08.339: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-1fef7ca5-e521-11ea-87d5-0242ac11000a container client-container: STEP: delete the pod Aug 23 09:15:08.390: INFO: Waiting for pod downwardapi-volume-1fef7ca5-e521-11ea-87d5-0242ac11000a to disappear Aug 23 09:15:08.403: INFO: Pod downwardapi-volume-1fef7ca5-e521-11ea-87d5-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:15:08.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-q2dhh" for this suite. 
Aug 23 09:15:14.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:15:14.474: INFO: namespace: e2e-tests-projected-q2dhh, resource: bindings, ignored listing per whitelist Aug 23 09:15:14.480: INFO: namespace e2e-tests-projected-q2dhh deletion completed in 6.075326823s • [SLOW TEST:12.525 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:15:14.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 23 09:15:22.743: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 23 09:15:22.753: INFO: Pod pod-with-prestop-http-hook still exists Aug 23 09:15:24.754: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 23 09:15:24.757: INFO: Pod pod-with-prestop-http-hook still exists Aug 23 09:15:26.754: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 23 09:15:26.757: INFO: Pod pod-with-prestop-http-hook still exists Aug 23 09:15:28.754: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 23 09:15:28.766: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:15:28.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-6qn5b" for this suite. 
Aug 23 09:15:50.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:15:50.800: INFO: namespace: e2e-tests-container-lifecycle-hook-6qn5b, resource: bindings, ignored listing per whitelist Aug 23 09:15:50.907: INFO: namespace e2e-tests-container-lifecycle-hook-6qn5b deletion completed in 22.131685399s • [SLOW TEST:36.427 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:15:50.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-mpnps Aug 23 09:15:55.035: INFO: Started pod liveness-http in namespace 
e2e-tests-container-probe-mpnps STEP: checking the pod's current state and verifying that restartCount is present Aug 23 09:15:55.038: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:19:56.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-mpnps" for this suite. Aug 23 09:20:02.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:20:02.447: INFO: namespace: e2e-tests-container-probe-mpnps, resource: bindings, ignored listing per whitelist Aug 23 09:20:02.457: INFO: namespace e2e-tests-container-probe-mpnps deletion completed in 6.136335838s • [SLOW TEST:251.550 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:20:02.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-d30ceb8c-e521-11ea-87d5-0242ac11000a STEP: Creating a pod to test consume configMaps Aug 23 09:20:02.595: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d30f0161-e521-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-c98s4" to be "success or failure" Aug 23 09:20:02.599: INFO: Pod "pod-projected-configmaps-d30f0161-e521-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.637085ms Aug 23 09:20:04.603: INFO: Pod "pod-projected-configmaps-d30f0161-e521-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007646754s Aug 23 09:20:06.608: INFO: Pod "pod-projected-configmaps-d30f0161-e521-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012212443s STEP: Saw pod success Aug 23 09:20:06.608: INFO: Pod "pod-projected-configmaps-d30f0161-e521-11ea-87d5-0242ac11000a" satisfied condition "success or failure" Aug 23 09:20:06.611: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-d30f0161-e521-11ea-87d5-0242ac11000a container projected-configmap-volume-test: STEP: delete the pod Aug 23 09:20:06.638: INFO: Waiting for pod pod-projected-configmaps-d30f0161-e521-11ea-87d5-0242ac11000a to disappear Aug 23 09:20:06.656: INFO: Pod pod-projected-configmaps-d30f0161-e521-11ea-87d5-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:20:06.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c98s4" for this suite. 
Aug 23 09:20:12.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:20:12.738: INFO: namespace: e2e-tests-projected-c98s4, resource: bindings, ignored listing per whitelist Aug 23 09:20:12.799: INFO: namespace e2e-tests-projected-c98s4 deletion completed in 6.138821976s • [SLOW TEST:10.341 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:20:12.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-d937e200-e521-11ea-87d5-0242ac11000a STEP: Creating a pod to test consume secrets Aug 23 09:20:12.944: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d93a3c4b-e521-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-tp846" to be "success or failure" Aug 23 09:20:12.948: INFO: Pod "pod-projected-secrets-d93a3c4b-e521-11ea-87d5-0242ac11000a": Phase="Pending", 
Reason="", readiness=false. Elapsed: 3.507835ms Aug 23 09:20:14.953: INFO: Pod "pod-projected-secrets-d93a3c4b-e521-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008747414s Aug 23 09:20:16.956: INFO: Pod "pod-projected-secrets-d93a3c4b-e521-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011928351s STEP: Saw pod success Aug 23 09:20:16.956: INFO: Pod "pod-projected-secrets-d93a3c4b-e521-11ea-87d5-0242ac11000a" satisfied condition "success or failure" Aug 23 09:20:16.959: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-d93a3c4b-e521-11ea-87d5-0242ac11000a container projected-secret-volume-test: STEP: delete the pod Aug 23 09:20:16.980: INFO: Waiting for pod pod-projected-secrets-d93a3c4b-e521-11ea-87d5-0242ac11000a to disappear Aug 23 09:20:16.985: INFO: Pod pod-projected-secrets-d93a3c4b-e521-11ea-87d5-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:20:16.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tp846" for this suite. 
Aug 23 09:20:23.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:20:23.093: INFO: namespace: e2e-tests-projected-tp846, resource: bindings, ignored listing per whitelist Aug 23 09:20:23.129: INFO: namespace e2e-tests-projected-tp846 deletion completed in 6.141364196s • [SLOW TEST:10.330 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:20:23.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9hmr2 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 23 09:20:23.254: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 23 09:20:53.485: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.15 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9hmr2 
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 09:20:53.486: INFO: >>> kubeConfig: /root/.kube/config
I0823 09:20:53.525592 6 log.go:172] (0xc0020ac2c0) (0xc0010cb4a0) Create stream
I0823 09:20:53.525625 6 log.go:172] (0xc0020ac2c0) (0xc0010cb4a0) Stream added, broadcasting: 1
I0823 09:20:53.527848 6 log.go:172] (0xc0020ac2c0) Reply frame received for 1
I0823 09:20:53.527880 6 log.go:172] (0xc0020ac2c0) (0xc001ffb7c0) Create stream
I0823 09:20:53.527897 6 log.go:172] (0xc0020ac2c0) (0xc001ffb7c0) Stream added, broadcasting: 3
I0823 09:20:53.528688 6 log.go:172] (0xc0020ac2c0) Reply frame received for 3
I0823 09:20:53.528708 6 log.go:172] (0xc0020ac2c0) (0xc000d30dc0) Create stream
I0823 09:20:53.528713 6 log.go:172] (0xc0020ac2c0) (0xc000d30dc0) Stream added, broadcasting: 5
I0823 09:20:53.529952 6 log.go:172] (0xc0020ac2c0) Reply frame received for 5
I0823 09:20:54.610314 6 log.go:172] (0xc0020ac2c0) Data frame received for 3
I0823 09:20:54.610352 6 log.go:172] (0xc001ffb7c0) (3) Data frame handling
I0823 09:20:54.610368 6 log.go:172] (0xc001ffb7c0) (3) Data frame sent
I0823 09:20:54.610377 6 log.go:172] (0xc0020ac2c0) Data frame received for 3
I0823 09:20:54.610384 6 log.go:172] (0xc001ffb7c0) (3) Data frame handling
I0823 09:20:54.610671 6 log.go:172] (0xc0020ac2c0) Data frame received for 5
I0823 09:20:54.610704 6 log.go:172] (0xc000d30dc0) (5) Data frame handling
I0823 09:20:54.614115 6 log.go:172] (0xc0020ac2c0) Data frame received for 1
I0823 09:20:54.614153 6 log.go:172] (0xc0010cb4a0) (1) Data frame handling
I0823 09:20:54.614185 6 log.go:172] (0xc0010cb4a0) (1) Data frame sent
I0823 09:20:54.614209 6 log.go:172] (0xc0020ac2c0) (0xc0010cb4a0) Stream removed, broadcasting: 1
I0823 09:20:54.614238 6 log.go:172] (0xc0020ac2c0) Go away received
I0823 09:20:54.614450 6 log.go:172] (0xc0020ac2c0) (0xc0010cb4a0) Stream removed, broadcasting: 1
I0823 09:20:54.614474 6 log.go:172] (0xc0020ac2c0) (0xc001ffb7c0) Stream removed, broadcasting: 3
I0823 09:20:54.614486 6 log.go:172] (0xc0020ac2c0) (0xc000d30dc0) Stream removed, broadcasting: 5
Aug 23 09:20:54.614: INFO: Found all expected endpoints: [netserver-0]
Aug 23 09:20:54.618: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.23 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-9hmr2 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 09:20:54.618: INFO: >>> kubeConfig: /root/.kube/config
I0823 09:20:54.655075 6 log.go:172] (0xc0021e62c0) (0xc001ffb9a0) Create stream
I0823 09:20:54.655117 6 log.go:172] (0xc0021e62c0) (0xc001ffb9a0) Stream added, broadcasting: 1
I0823 09:20:54.661797 6 log.go:172] (0xc0021e62c0) Reply frame received for 1
I0823 09:20:54.661863 6 log.go:172] (0xc0021e62c0) (0xc001bc4000) Create stream
I0823 09:20:54.661876 6 log.go:172] (0xc0021e62c0) (0xc001bc4000) Stream added, broadcasting: 3
I0823 09:20:54.662742 6 log.go:172] (0xc0021e62c0) Reply frame received for 3
I0823 09:20:54.662796 6 log.go:172] (0xc0021e62c0) (0xc001bc40a0) Create stream
I0823 09:20:54.662830 6 log.go:172] (0xc0021e62c0) (0xc001bc40a0) Stream added, broadcasting: 5
I0823 09:20:54.663630 6 log.go:172] (0xc0021e62c0) Reply frame received for 5
I0823 09:20:55.719472 6 log.go:172] (0xc0021e62c0) Data frame received for 5
I0823 09:20:55.719531 6 log.go:172] (0xc001bc40a0) (5) Data frame handling
I0823 09:20:55.719571 6 log.go:172] (0xc0021e62c0) Data frame received for 3
I0823 09:20:55.719623 6 log.go:172] (0xc001bc4000) (3) Data frame handling
I0823 09:20:55.719651 6 log.go:172] (0xc001bc4000) (3) Data frame sent
I0823 09:20:55.719731 6 log.go:172] (0xc0021e62c0) Data frame received for 3
I0823 09:20:55.719765 6 log.go:172] (0xc001bc4000) (3) Data frame handling
I0823 09:20:55.721984 6 log.go:172] (0xc0021e62c0) Data frame received for 1
I0823 09:20:55.722017 6 log.go:172] (0xc001ffb9a0) (1) Data frame handling
I0823 09:20:55.722047 6 log.go:172] (0xc001ffb9a0) (1) Data frame sent
I0823 09:20:55.722074 6 log.go:172] (0xc0021e62c0) (0xc001ffb9a0) Stream removed, broadcasting: 1
I0823 09:20:55.722126 6 log.go:172] (0xc0021e62c0) Go away received
I0823 09:20:55.722200 6 log.go:172] (0xc0021e62c0) (0xc001ffb9a0) Stream removed, broadcasting: 1
I0823 09:20:55.722228 6 log.go:172] (0xc0021e62c0) (0xc001bc4000) Stream removed, broadcasting: 3
I0823 09:20:55.722252 6 log.go:172] (0xc0021e62c0) (0xc001bc40a0) Stream removed, broadcasting: 5
Aug 23 09:20:55.722: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:20:55.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-9hmr2" for this suite.
Aug 23 09:21:21.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:21:21.760: INFO: namespace: e2e-tests-pod-network-test-9hmr2, resource: bindings, ignored listing per whitelist
Aug 23 09:21:21.912: INFO: namespace e2e-tests-pod-network-test-9hmr2 deletion completed in 26.184962081s
• [SLOW TEST:58.783 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:21:21.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-k7dn
STEP: Creating a pod to test atomic-volume-subpath
Aug 23 09:21:22.514: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-k7dn" in namespace "e2e-tests-subpath-hg2q2" to be "success or failure"
Aug 23 09:21:22.646: INFO: Pod "pod-subpath-test-projected-k7dn": Phase="Pending", Reason="", readiness=false. Elapsed: 131.212383ms
Aug 23 09:21:24.649: INFO: Pod "pod-subpath-test-projected-k7dn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134031363s
Aug 23 09:21:26.652: INFO: Pod "pod-subpath-test-projected-k7dn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137245666s
Aug 23 09:21:28.655: INFO: Pod "pod-subpath-test-projected-k7dn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140296558s
Aug 23 09:21:30.659: INFO: Pod "pod-subpath-test-projected-k7dn": Phase="Running", Reason="", readiness=false. Elapsed: 8.144585504s
Aug 23 09:21:32.662: INFO: Pod "pod-subpath-test-projected-k7dn": Phase="Running", Reason="", readiness=false. Elapsed: 10.147615894s
Aug 23 09:21:34.665: INFO: Pod "pod-subpath-test-projected-k7dn": Phase="Running", Reason="", readiness=false. Elapsed: 12.150775358s
Aug 23 09:21:36.669: INFO: Pod "pod-subpath-test-projected-k7dn": Phase="Running", Reason="", readiness=false. Elapsed: 14.154402752s
Aug 23 09:21:38.672: INFO: Pod "pod-subpath-test-projected-k7dn": Phase="Running", Reason="", readiness=false. Elapsed: 16.157543286s
Aug 23 09:21:40.676: INFO: Pod "pod-subpath-test-projected-k7dn": Phase="Running", Reason="", readiness=false. Elapsed: 18.161999981s
Aug 23 09:21:42.680: INFO: Pod "pod-subpath-test-projected-k7dn": Phase="Running", Reason="", readiness=false. Elapsed: 20.16513317s
Aug 23 09:21:44.683: INFO: Pod "pod-subpath-test-projected-k7dn": Phase="Running", Reason="", readiness=false. Elapsed: 22.168152039s
Aug 23 09:21:46.686: INFO: Pod "pod-subpath-test-projected-k7dn": Phase="Running", Reason="", readiness=false. Elapsed: 24.171604222s
Aug 23 09:21:48.698: INFO: Pod "pod-subpath-test-projected-k7dn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.183134969s
STEP: Saw pod success
Aug 23 09:21:48.698: INFO: Pod "pod-subpath-test-projected-k7dn" satisfied condition "success or failure"
Aug 23 09:21:48.699: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-k7dn container test-container-subpath-projected-k7dn:
STEP: delete the pod
Aug 23 09:21:48.726: INFO: Waiting for pod pod-subpath-test-projected-k7dn to disappear
Aug 23 09:21:48.729: INFO: Pod pod-subpath-test-projected-k7dn no longer exists
STEP: Deleting pod pod-subpath-test-projected-k7dn
Aug 23 09:21:48.729: INFO: Deleting pod "pod-subpath-test-projected-k7dn" in namespace "e2e-tests-subpath-hg2q2"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:21:48.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hg2q2" for this suite.
Aug 23 09:21:54.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:21:54.792: INFO: namespace: e2e-tests-subpath-hg2q2, resource: bindings, ignored listing per whitelist
Aug 23 09:21:54.815: INFO: namespace e2e-tests-subpath-hg2q2 deletion completed in 6.082621411s
• [SLOW TEST:32.903 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:21:54.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 23 09:21:54.939: INFO: Waiting up to 5m0s for pod "pod-15ff0005-e522-11ea-87d5-0242ac11000a" in namespace "e2e-tests-emptydir-ddrkr" to be "success or failure"
Aug 23 09:21:54.957: INFO: Pod "pod-15ff0005-e522-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.710785ms
Aug 23 09:21:56.961: INFO: Pod "pod-15ff0005-e522-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022020985s
Aug 23 09:21:58.964: INFO: Pod "pod-15ff0005-e522-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024988893s
STEP: Saw pod success
Aug 23 09:21:58.964: INFO: Pod "pod-15ff0005-e522-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:21:58.966: INFO: Trying to get logs from node hunter-worker2 pod pod-15ff0005-e522-11ea-87d5-0242ac11000a container test-container:
STEP: delete the pod
Aug 23 09:21:58.981: INFO: Waiting for pod pod-15ff0005-e522-11ea-87d5-0242ac11000a to disappear
Aug 23 09:21:58.988: INFO: Pod pod-15ff0005-e522-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:21:58.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ddrkr" for this suite.
Aug 23 09:22:04.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:22:05.040: INFO: namespace: e2e-tests-emptydir-ddrkr, resource: bindings, ignored listing per whitelist
Aug 23 09:22:05.119: INFO: namespace e2e-tests-emptydir-ddrkr deletion completed in 6.129739206s
• [SLOW TEST:10.304 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:22:05.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Aug 23 09:22:11.807: INFO: Successfully updated pod "annotationupdate1c2e00d3-e522-11ea-87d5-0242ac11000a"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:22:13.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-84lgd" for this suite.
Aug 23 09:22:35.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:22:35.878: INFO: namespace: e2e-tests-projected-84lgd, resource: bindings, ignored listing per whitelist
Aug 23 09:22:35.922: INFO: namespace e2e-tests-projected-84lgd deletion completed in 22.075918175s
• [SLOW TEST:30.803 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:22:35.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-2e7ec167-e522-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 09:22:36.039: INFO: Waiting up to 5m0s for pod "pod-configmaps-2e81a4da-e522-11ea-87d5-0242ac11000a" in namespace "e2e-tests-configmap-658sz" to be "success or failure"
Aug 23 09:22:36.058: INFO: Pod "pod-configmaps-2e81a4da-e522-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.278955ms
Aug 23 09:22:38.061: INFO: Pod "pod-configmaps-2e81a4da-e522-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021286513s
Aug 23 09:22:40.063: INFO: Pod "pod-configmaps-2e81a4da-e522-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024058358s
STEP: Saw pod success
Aug 23 09:22:40.063: INFO: Pod "pod-configmaps-2e81a4da-e522-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:22:40.065: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-2e81a4da-e522-11ea-87d5-0242ac11000a container configmap-volume-test:
STEP: delete the pod
Aug 23 09:22:40.090: INFO: Waiting for pod pod-configmaps-2e81a4da-e522-11ea-87d5-0242ac11000a to disappear
Aug 23 09:22:40.144: INFO: Pod pod-configmaps-2e81a4da-e522-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:22:40.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-658sz" for this suite.
Aug 23 09:22:46.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:22:46.238: INFO: namespace: e2e-tests-configmap-658sz, resource: bindings, ignored listing per whitelist
Aug 23 09:22:46.262: INFO: namespace e2e-tests-configmap-658sz deletion completed in 6.114442358s
• [SLOW TEST:10.339 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:22:46.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 23 09:22:46.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34ae49b9-e522-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-77dqz" to be "success or failure"
Aug 23 09:22:46.383: INFO: Pod "downwardapi-volume-34ae49b9-e522-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290284ms
Aug 23 09:22:48.387: INFO: Pod "downwardapi-volume-34ae49b9-e522-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007930631s
Aug 23 09:22:50.390: INFO: Pod "downwardapi-volume-34ae49b9-e522-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010807772s
STEP: Saw pod success
Aug 23 09:22:50.390: INFO: Pod "downwardapi-volume-34ae49b9-e522-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:22:50.392: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-34ae49b9-e522-11ea-87d5-0242ac11000a container client-container:
STEP: delete the pod
Aug 23 09:22:50.460: INFO: Waiting for pod downwardapi-volume-34ae49b9-e522-11ea-87d5-0242ac11000a to disappear
Aug 23 09:22:50.474: INFO: Pod downwardapi-volume-34ae49b9-e522-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:22:50.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-77dqz" for this suite.
Aug 23 09:22:56.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:22:56.513: INFO: namespace: e2e-tests-projected-77dqz, resource: bindings, ignored listing per whitelist
Aug 23 09:22:56.551: INFO: namespace e2e-tests-projected-77dqz deletion completed in 6.073647036s
• [SLOW TEST:10.289 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:22:56.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-3b1fe940-e522-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 09:22:57.306: INFO: Waiting up to 5m0s for pod "pod-configmaps-3b272745-e522-11ea-87d5-0242ac11000a" in namespace "e2e-tests-configmap-fh4px" to be "success or failure"
Aug 23 09:22:57.354: INFO: Pod "pod-configmaps-3b272745-e522-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 48.220739ms
Aug 23 09:22:59.366: INFO: Pod "pod-configmaps-3b272745-e522-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060176435s
Aug 23 09:23:01.471: INFO: Pod "pod-configmaps-3b272745-e522-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.164895955s
STEP: Saw pod success
Aug 23 09:23:01.471: INFO: Pod "pod-configmaps-3b272745-e522-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:23:01.473: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-3b272745-e522-11ea-87d5-0242ac11000a container configmap-volume-test:
STEP: delete the pod
Aug 23 09:23:01.760: INFO: Waiting for pod pod-configmaps-3b272745-e522-11ea-87d5-0242ac11000a to disappear
Aug 23 09:23:01.964: INFO: Pod pod-configmaps-3b272745-e522-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:23:01.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fh4px" for this suite.
Aug 23 09:23:08.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:23:08.421: INFO: namespace: e2e-tests-configmap-fh4px, resource: bindings, ignored listing per whitelist
Aug 23 09:23:08.434: INFO: namespace e2e-tests-configmap-fh4px deletion completed in 6.467032504s
• [SLOW TEST:11.883 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:23:08.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:23:12.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-rj4pp" for this suite.
Aug 23 09:23:52.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:23:52.642: INFO: namespace: e2e-tests-kubelet-test-rj4pp, resource: bindings, ignored listing per whitelist
Aug 23 09:23:52.664: INFO: namespace e2e-tests-kubelet-test-rj4pp deletion completed in 40.079449337s
• [SLOW TEST:44.230 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:23:52.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0823 09:24:33.154044 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 23 09:24:33.154: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:24:33.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7bc7t" for this suite.
Aug 23 09:24:43.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:24:43.210: INFO: namespace: e2e-tests-gc-7bc7t, resource: bindings, ignored listing per whitelist
Aug 23 09:24:43.246: INFO: namespace e2e-tests-gc-7bc7t deletion completed in 10.088581332s
• [SLOW TEST:50.581 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:24:43.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 09:24:43.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:24:47.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-g2q4c" for this suite.
Aug 23 09:25:29.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:25:29.672: INFO: namespace: e2e-tests-pods-g2q4c, resource: bindings, ignored listing per whitelist
Aug 23 09:25:29.704: INFO: namespace e2e-tests-pods-g2q4c deletion completed in 42.074254741s
• [SLOW TEST:46.458 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:25:29.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 09:25:29.856: INFO: Creating deployment "nginx-deployment"
Aug 23 09:25:29.895: INFO: Waiting for observed generation 1
Aug 23 09:25:31.947: INFO: Waiting for all required pods to come up
Aug 23 09:25:31.951: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 23 09:25:44.156: INFO: Waiting for deployment "nginx-deployment" to complete
Aug 23 09:25:44.180: INFO: Updating deployment
"nginx-deployment" with a non-existent image
Aug 23 09:25:44.185: INFO: Updating deployment nginx-deployment
Aug 23 09:25:44.185: INFO: Waiting for observed generation 2
Aug 23 09:25:46.295: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 23 09:25:46.323: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 23 09:25:46.325: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 23 09:25:46.707: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 23 09:25:46.707: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 23 09:25:46.709: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 23 09:25:46.714: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Aug 23 09:25:46.714: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Aug 23 09:25:46.719: INFO: Updating deployment nginx-deployment
Aug 23 09:25:46.719: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Aug 23 09:25:47.216: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 23 09:25:47.418: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Aug 23 09:25:47.666: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nlb2n/deployments/nginx-deployment,UID:96205a1e-e522-11ea-a485-0242ac120004,ResourceVersion:1677459,Generation:3,CreationTimestamp:2020-08-23 09:25:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-08-23 09:25:44 +0000 UTC 2020-08-23 09:25:29 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-08-23 09:25:47 +0000 UTC 2020-08-23 09:25:47 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}
Aug 23 09:25:47.833: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nlb2n/replicasets/nginx-deployment-5c98f8fb5,UID:9eaad62d-e522-11ea-a485-0242ac120004,ResourceVersion:1677501,Generation:3,CreationTimestamp:2020-08-23 09:25:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 96205a1e-e522-11ea-a485-0242ac120004 0xc000d7a537 0xc000d7a538}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 23 09:25:47.833: INFO: All old ReplicaSets of Deployment "nginx-deployment": Aug 23 09:25:47.833: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nlb2n/replicasets/nginx-deployment-85ddf47c5d,UID:962886af-e522-11ea-a485-0242ac120004,ResourceVersion:1677497,Generation:3,CreationTimestamp:2020-08-23 09:25:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 96205a1e-e522-11ea-a485-0242ac120004 0xc000d7a687 0xc000d7a688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Aug 23 09:25:47.903: INFO: Pod "nginx-deployment-5c98f8fb5-46jch" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-46jch,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-5c98f8fb5-46jch,UID:a0bc64a2-e522-11ea-a485-0242ac120004,ResourceVersion:1677502,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9eaad62d-e522-11ea-a485-0242ac120004 0xc00147f237 0xc00147f238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00147f2b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00147f2d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 23 09:25:47.903: INFO: Pod "nginx-deployment-5c98f8fb5-4dlt2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-4dlt2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-5c98f8fb5-4dlt2,UID:a09cd695-e522-11ea-a485-0242ac120004,ResourceVersion:1677485,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9eaad62d-e522-11ea-a485-0242ac120004 0xc00147f347 0xc00147f348}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00147f3c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00147f3e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 23 09:25:47.903: INFO: Pod "nginx-deployment-5c98f8fb5-54rz8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-54rz8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-5c98f8fb5-54rz8,UID:9ed7247b-e522-11ea-a485-0242ac120004,ResourceVersion:1677432,Generation:0,CreationTimestamp:2020-08-23 09:25:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9eaad62d-e522-11ea-a485-0242ac120004 0xc00147f457 0xc00147f458}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00147f4d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00147f4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-23 09:25:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 23 09:25:47.903: INFO: Pod "nginx-deployment-5c98f8fb5-bqxlg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bqxlg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-5c98f8fb5-bqxlg,UID:a0793f3c-e522-11ea-a485-0242ac120004,ResourceVersion:1677460,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9eaad62d-e522-11ea-a485-0242ac120004 0xc00147f5b0 0xc00147f5b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00147f630} {node.kubernetes.io/unreachable Exists NoExecute 0xc00147f650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 23 09:25:47.903: INFO: Pod "nginx-deployment-5c98f8fb5-bx4qn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bx4qn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-5c98f8fb5-bx4qn,UID:a09cd548-e522-11ea-a485-0242ac120004,ResourceVersion:1677496,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9eaad62d-e522-11ea-a485-0242ac120004 0xc00147f6c7 0xc00147f6c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00147f740} {node.kubernetes.io/unreachable Exists NoExecute 0xc00147f760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 23 09:25:47.903: INFO: Pod "nginx-deployment-5c98f8fb5-fcx84" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fcx84,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-5c98f8fb5-fcx84,UID:9eabdac3-e522-11ea-a485-0242ac120004,ResourceVersion:1677408,Generation:0,CreationTimestamp:2020-08-23 09:25:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9eaad62d-e522-11ea-a485-0242ac120004 0xc00147f7d7 0xc00147f7d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00147f8c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00147f8e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-23 09:25:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 23 09:25:47.903: INFO: Pod "nginx-deployment-5c98f8fb5-jn7qk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jn7qk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-5c98f8fb5-jn7qk,UID:9eace43d-e522-11ea-a485-0242ac120004,ResourceVersion:1677419,Generation:0,CreationTimestamp:2020-08-23 09:25:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9eaad62d-e522-11ea-a485-0242ac120004 0xc00147f9e0 0xc00147f9e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00147fa60} {node.kubernetes.io/unreachable Exists NoExecute 0xc00147fa80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-23 09:25:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 23 09:25:47.903: INFO: Pod "nginx-deployment-5c98f8fb5-k7lh5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-k7lh5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-5c98f8fb5-k7lh5,UID:a0983d2c-e522-11ea-a485-0242ac120004,ResourceVersion:1677510,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9eaad62d-e522-11ea-a485-0242ac120004 0xc00147fca0 0xc00147fca1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00147fd20} {node.kubernetes.io/unreachable Exists NoExecute 0xc00147fe40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-23 09:25:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 23 09:25:47.904: INFO: Pod "nginx-deployment-5c98f8fb5-ngb45" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ngb45,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-5c98f8fb5-ngb45,UID:a09cc52e-e522-11ea-a485-0242ac120004,ResourceVersion:1677487,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9eaad62d-e522-11ea-a485-0242ac120004 0xc00147ff10 0xc00147ff11}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0011da480} {node.kubernetes.io/unreachable Exists NoExecute 0xc0011da4a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 23 09:25:47.904: INFO: Pod "nginx-deployment-5c98f8fb5-rszjh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rszjh,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-5c98f8fb5-rszjh,UID:9eacdc2d-e522-11ea-a485-0242ac120004,ResourceVersion:1677410,Generation:0,CreationTimestamp:2020-08-23 09:25:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9eaad62d-e522-11ea-a485-0242ac120004 0xc0011da537 0xc0011da538}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0011da930} {node.kubernetes.io/unreachable Exists NoExecute 0xc0011da980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-23 09:25:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 23 09:25:47.904: INFO: Pod "nginx-deployment-5c98f8fb5-x4qn2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-x4qn2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-5c98f8fb5-x4qn2,UID:a09cde9e-e522-11ea-a485-0242ac120004,ResourceVersion:1677495,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9eaad62d-e522-11ea-a485-0242ac120004 0xc0011db1f0 0xc0011db1f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0011db430} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0011db480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.904: INFO: Pod "nginx-deployment-5c98f8fb5-x5pmr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-x5pmr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-5c98f8fb5-x5pmr,UID:a09837c1-e522-11ea-a485-0242ac120004,ResourceVersion:1677475,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9eaad62d-e522-11ea-a485-0242ac120004 0xc0011db4f7 0xc0011db4f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0011db8e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0011db9f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.904: INFO: Pod "nginx-deployment-5c98f8fb5-xg9sm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xg9sm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-5c98f8fb5-xg9sm,UID:9edba249-e522-11ea-a485-0242ac120004,ResourceVersion:1677435,Generation:0,CreationTimestamp:2020-08-23 09:25:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 9eaad62d-e522-11ea-a485-0242ac120004 0xc0011dba67 0xc0011dba68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0011dbbf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0011dbc10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:44 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:,StartTime:2020-08-23 09:25:44 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.904: INFO: Pod "nginx-deployment-85ddf47c5d-2d8kw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2d8kw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-2d8kw,UID:a0987c89-e522-11ea-a485-0242ac120004,ResourceVersion:1677477,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc0011dbea0 0xc0011dbea1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000efc7a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000efc800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.904: INFO: Pod "nginx-deployment-85ddf47c5d-47pkx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-47pkx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-47pkx,UID:a09ce280-e522-11ea-a485-0242ac120004,ResourceVersion:1677494,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc000efcb97 0xc000efcb98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000efd4e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000efd510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.905: INFO: Pod "nginx-deployment-85ddf47c5d-5s59t" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5s59t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-5s59t,UID:a09cdc05-e522-11ea-a485-0242ac120004,ResourceVersion:1677492,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc000efd657 0xc000efd658}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc000efdb70} {node.kubernetes.io/unreachable Exists NoExecute 0xc000efdd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.905: INFO: Pod "nginx-deployment-85ddf47c5d-5v9qs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5v9qs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-5v9qs,UID:a0793e17-e522-11ea-a485-0242ac120004,ResourceVersion:1677461,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc000efdf77 0xc000efdf78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000532100} {node.kubernetes.io/unreachable Exists NoExecute 0xc000532160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.905: INFO: Pod "nginx-deployment-85ddf47c5d-7htpb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7htpb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-7htpb,UID:a09cd1c6-e522-11ea-a485-0242ac120004,ResourceVersion:1677489,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc000532217 0xc000532218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0005325c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000532640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.905: INFO: Pod "nginx-deployment-85ddf47c5d-7qbcc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7qbcc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-7qbcc,UID:a07941ef-e522-11ea-a485-0242ac120004,ResourceVersion:1677455,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc0005328e7 0xc0005328e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc000532bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000532be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.905: INFO: Pod "nginx-deployment-85ddf47c5d-9c6nq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9c6nq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-9c6nq,UID:a09878c4-e522-11ea-a485-0242ac120004,ResourceVersion:1677478,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc000532c67 0xc000532c68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000532f60} {node.kubernetes.io/unreachable Exists NoExecute 0xc000532f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.905: INFO: Pod "nginx-deployment-85ddf47c5d-9fpj4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9fpj4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-9fpj4,UID:a09cd7dd-e522-11ea-a485-0242ac120004,ResourceVersion:1677490,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc000533067 0xc000533068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000533120} {node.kubernetes.io/unreachable Exists NoExecute 0xc000533160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.905: INFO: Pod "nginx-deployment-85ddf47c5d-bqtfg" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bqtfg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-bqtfg,UID:965b0a34-e522-11ea-a485-0242ac120004,ResourceVersion:1677342,Generation:0,CreationTimestamp:2020-08-23 09:25:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc000533217 0xc000533218}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc000533360} {node.kubernetes.io/unreachable Exists NoExecute 0xc000533380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.36,StartTime:2020-08-23 09:25:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-23 09:25:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://38a665163b58e7110b86e44b348b6390f91f6fe466f6dd620bc097c8f1b9c375}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.905: INFO: Pod "nginx-deployment-85ddf47c5d-hlv4v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hlv4v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-hlv4v,UID:a0987ce7-e522-11ea-a485-0242ac120004,ResourceVersion:1677479,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc0005334b7 0xc0005334b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0005335f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000533610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.905: INFO: Pod "nginx-deployment-85ddf47c5d-hrgp7" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hrgp7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-hrgp7,UID:962b4ec0-e522-11ea-a485-0242ac120004,ResourceVersion:1677331,Generation:0,CreationTimestamp:2020-08-23 09:25:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc0005336b7 0xc0005336b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc000533830} {node.kubernetes.io/unreachable Exists NoExecute 0xc000533850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:29 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.34,StartTime:2020-08-23 09:25:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-23 09:25:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://49b06bde48d7c5345df430f16fb78ce6bdf5477471db6821c2e81b3216001c7a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.906: INFO: Pod "nginx-deployment-85ddf47c5d-nqhht" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nqhht,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-nqhht,UID:a09cda08-e522-11ea-a485-0242ac120004,ResourceVersion:1677491,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc000533947 0xc000533948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000533a40} {node.kubernetes.io/unreachable Exists NoExecute 0xc000533a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.906: INFO: Pod "nginx-deployment-85ddf47c5d-p5xwn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p5xwn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-p5xwn,UID:9636817d-e522-11ea-a485-0242ac120004,ResourceVersion:1677351,Generation:0,CreationTimestamp:2020-08-23 09:25:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc000533b07 0xc000533b08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc000533b90} {node.kubernetes.io/unreachable Exists NoExecute 0xc000533bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.35,StartTime:2020-08-23 09:25:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-23 09:25:40 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8c69d23f4d6512f0fbd31b20e3896d971d786142afa27cadff992647097863c5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.906: INFO: Pod "nginx-deployment-85ddf47c5d-pz2mm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pz2mm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-pz2mm,UID:963455c6-e522-11ea-a485-0242ac120004,ResourceVersion:1677343,Generation:0,CreationTimestamp:2020-08-23 09:25:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc000533ce7 0xc000533ce8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000533d70} {node.kubernetes.io/unreachable Exists NoExecute 0xc000533da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.25,StartTime:2020-08-23 09:25:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-23 09:25:39 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1f5693fceb271a8979b7334dd98fac504f05a5732174d62316d1d43831403f27}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.906: INFO: Pod "nginx-deployment-85ddf47c5d-pzkr5" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pzkr5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-pzkr5,UID:9636740a-e522-11ea-a485-0242ac120004,ResourceVersion:1677374,Generation:0,CreationTimestamp:2020-08-23 09:25:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc000533ec7 0xc000533ec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000533f50} {node.kubernetes.io/unreachable Exists NoExecute 0xc000533f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.27,StartTime:2020-08-23 09:25:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-23 09:25:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://438a2036ef7e70b00eee87e45e55c22bb0035a10d3dc63ea7782d8341d18a63b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.906: INFO: Pod "nginx-deployment-85ddf47c5d-snn25" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-snn25,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-snn25,UID:a04e212b-e522-11ea-a485-0242ac120004,ResourceVersion:1677499,Generation:0,CreationTimestamp:2020-08-23 09:25:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc0003247b7 0xc0003247b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0003249b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000324ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-23 09:25:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.906: INFO: Pod "nginx-deployment-85ddf47c5d-t948l" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t948l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-t948l,UID:a0987591-e522-11ea-a485-0242ac120004,ResourceVersion:1677480,Generation:0,CreationTimestamp:2020-08-23 09:25:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc000325967 0xc000325968}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00001a250} {node.kubernetes.io/unreachable Exists NoExecute 0xc00001a3a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.906: INFO: Pod "nginx-deployment-85ddf47c5d-t9fv5" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-t9fv5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-t9fv5,UID:96367685-e522-11ea-a485-0242ac120004,ResourceVersion:1677364,Generation:0,CreationTimestamp:2020-08-23 09:25:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc00001a6b7 0xc00001a6b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00001a790} {node.kubernetes.io/unreachable Exists NoExecute 0xc00001a7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.38,StartTime:2020-08-23 09:25:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-23 09:25:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://44917f3506053f8781760d45d11c7ac66bc18e1d38c903685c5f850f16c29c53}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.906: INFO: Pod "nginx-deployment-85ddf47c5d-v5tgr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v5tgr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-v5tgr,UID:965b1505-e522-11ea-a485-0242ac120004,ResourceVersion:1677368,Generation:0,CreationTimestamp:2020-08-23 09:25:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc00001aa27 0xc00001aa28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00001ab10} {node.kubernetes.io/unreachable Exists NoExecute 0xc00001ab30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.37,StartTime:2020-08-23 09:25:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-23 09:25:41 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b75bf2c5f84eb21f968b919dcd7b899b4b65d224af58fd63b3549e0f9787131b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 23 09:25:47.906: INFO: Pod "nginx-deployment-85ddf47c5d-zjmjq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zjmjq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-nlb2n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nlb2n/pods/nginx-deployment-85ddf47c5d-zjmjq,UID:96345954-e522-11ea-a485-0242ac120004,ResourceVersion:1677341,Generation:0,CreationTimestamp:2020-08-23 09:25:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 962886af-e522-11ea-a485-0242ac120004 0xc00001afc7 0xc00001afc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-ghx95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ghx95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-ghx95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00001b280} {node.kubernetes.io/unreachable Exists NoExecute 0xc00001b340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:25:30 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.26,StartTime:2020-08-23 09:25:30 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-23 09:25:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5a47fbebae7614f40745ae7b40fdd329978f8f9cdf4cfb23f3a01a48fc3c242a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:25:47.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-nlb2n" for this suite. 
Aug 23 09:26:26.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:26:26.579: INFO: namespace: e2e-tests-deployment-nlb2n, resource: bindings, ignored listing per whitelist
Aug 23 09:26:26.595: INFO: namespace e2e-tests-deployment-nlb2n deletion completed in 38.587245638s
• [SLOW TEST:56.892 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:26:26.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 23 09:26:26.772: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b80a477e-e522-11ea-87d5-0242ac11000a" in namespace "e2e-tests-downward-api-8w5m5" to be "success or failure"
Aug 23 09:26:26.776: INFO: Pod "downwardapi-volume-b80a477e-e522-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.362702ms
Aug 23 09:26:28.832: INFO: Pod "downwardapi-volume-b80a477e-e522-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060833166s
Aug 23 09:26:30.928: INFO: Pod "downwardapi-volume-b80a477e-e522-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156657267s
STEP: Saw pod success
Aug 23 09:26:30.928: INFO: Pod "downwardapi-volume-b80a477e-e522-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:26:30.931: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b80a477e-e522-11ea-87d5-0242ac11000a container client-container:
STEP: delete the pod
Aug 23 09:26:30.957: INFO: Waiting for pod downwardapi-volume-b80a477e-e522-11ea-87d5-0242ac11000a to disappear
Aug 23 09:26:30.976: INFO: Pod downwardapi-volume-b80a477e-e522-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:26:30.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-8w5m5" for this suite.
Aug 23 09:26:38.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:26:39.195: INFO: namespace: e2e-tests-downward-api-8w5m5, resource: bindings, ignored listing per whitelist
Aug 23 09:26:39.200: INFO: namespace e2e-tests-downward-api-8w5m5 deletion completed in 8.220091237s
• [SLOW TEST:12.604 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:26:39.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Aug 23 09:26:39.333: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:26:47.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-656wb" for this suite.
Aug 23 09:26:53.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:26:53.537: INFO: namespace: e2e-tests-init-container-656wb, resource: bindings, ignored listing per whitelist
Aug 23 09:26:53.584: INFO: namespace e2e-tests-init-container-656wb deletion completed in 6.107441579s
• [SLOW TEST:14.384 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:26:53.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-f2l9g
Aug 23 09:26:57.727: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-f2l9g
STEP: checking the pod's current state and verifying that restartCount is present
Aug 23 09:26:57.730: INFO: Initial restart count of pod liveness-http is 0
Aug 23 09:27:17.939: INFO: Restart count of pod e2e-tests-container-probe-f2l9g/liveness-http is now 1 (20.209166247s elapsed)
Aug 23 09:27:35.970: INFO: Restart count of pod e2e-tests-container-probe-f2l9g/liveness-http is now 2 (38.240527229s elapsed)
Aug 23 09:27:56.275: INFO: Restart count of pod e2e-tests-container-probe-f2l9g/liveness-http is now 3 (58.544765525s elapsed)
Aug 23 09:28:14.339: INFO: Restart count of pod e2e-tests-container-probe-f2l9g/liveness-http is now 4 (1m16.609247792s elapsed)
Aug 23 09:29:26.564: INFO: Restart count of pod e2e-tests-container-probe-f2l9g/liveness-http is now 5 (2m28.834201837s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:29:26.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-f2l9g" for this suite.
Aug 23 09:29:32.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:29:32.636: INFO: namespace: e2e-tests-container-probe-f2l9g, resource: bindings, ignored listing per whitelist
Aug 23 09:29:32.674: INFO: namespace e2e-tests-container-probe-f2l9g deletion completed in 6.070173519s
• [SLOW TEST:159.089 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:29:32.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-26eb9bd4-e523-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 09:29:32.794: INFO: Waiting up to 5m0s for pod "pod-configmaps-26ec9692-e523-11ea-87d5-0242ac11000a" in namespace "e2e-tests-configmap-8mdpk" to be "success or failure"
Aug 23 09:29:32.799: INFO: Pod "pod-configmaps-26ec9692-e523-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.974466ms
Aug 23 09:29:34.840: INFO: Pod "pod-configmaps-26ec9692-e523-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046027687s
Aug 23 09:29:36.844: INFO: Pod "pod-configmaps-26ec9692-e523-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049645568s
STEP: Saw pod success
Aug 23 09:29:36.844: INFO: Pod "pod-configmaps-26ec9692-e523-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:29:36.847: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-26ec9692-e523-11ea-87d5-0242ac11000a container configmap-volume-test:
STEP: delete the pod
Aug 23 09:29:36.873: INFO: Waiting for pod pod-configmaps-26ec9692-e523-11ea-87d5-0242ac11000a to disappear
Aug 23 09:29:36.883: INFO: Pod pod-configmaps-26ec9692-e523-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:29:36.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8mdpk" for this suite.
Aug 23 09:29:43.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:29:43.015: INFO: namespace: e2e-tests-configmap-8mdpk, resource: bindings, ignored listing per whitelist
Aug 23 09:29:43.071: INFO: namespace e2e-tests-configmap-8mdpk deletion completed in 6.152251728s
• [SLOW TEST:10.397 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:29:43.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-2d2a648f-e523-11ea-87d5-0242ac11000a
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-2d2a648f-e523-11ea-87d5-0242ac11000a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:29:49.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vqdcl" for this suite.
Aug 23 09:30:11.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:30:11.482: INFO: namespace: e2e-tests-configmap-vqdcl, resource: bindings, ignored listing per whitelist
Aug 23 09:30:11.506: INFO: namespace e2e-tests-configmap-vqdcl deletion completed in 22.091707215s
• [SLOW TEST:28.435 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:30:11.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Aug 23 09:30:12.527: INFO: Waiting up to 5m0s for pod "var-expansion-3e72bf12-e523-11ea-87d5-0242ac11000a" in namespace "e2e-tests-var-expansion-pj7td" to be "success or failure"
Aug 23 09:30:12.607: INFO: Pod "var-expansion-3e72bf12-e523-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 80.142378ms
Aug 23 09:30:14.666: INFO: Pod "var-expansion-3e72bf12-e523-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139377407s
Aug 23 09:30:16.670: INFO: Pod "var-expansion-3e72bf12-e523-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 4.143285579s
Aug 23 09:30:18.673: INFO: Pod "var-expansion-3e72bf12-e523-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.146585271s
STEP: Saw pod success
Aug 23 09:30:18.673: INFO: Pod "var-expansion-3e72bf12-e523-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:30:18.676: INFO: Trying to get logs from node hunter-worker pod var-expansion-3e72bf12-e523-11ea-87d5-0242ac11000a container dapi-container:
STEP: delete the pod
Aug 23 09:30:18.714: INFO: Waiting for pod var-expansion-3e72bf12-e523-11ea-87d5-0242ac11000a to disappear
Aug 23 09:30:18.750: INFO: Pod var-expansion-3e72bf12-e523-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:30:18.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-pj7td" for this suite.
Aug 23 09:30:24.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:30:24.855: INFO: namespace: e2e-tests-var-expansion-pj7td, resource: bindings, ignored listing per whitelist
Aug 23 09:30:24.861: INFO: namespace e2e-tests-var-expansion-pj7td deletion completed in 6.10679099s
• [SLOW TEST:13.355 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:30:24.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 23 09:30:25.010: INFO: Waiting up to 5m0s for pod "pod-460a2455-e523-11ea-87d5-0242ac11000a" in namespace "e2e-tests-emptydir-2mc2c" to be "success or failure"
Aug 23 09:30:25.014: INFO: Pod "pod-460a2455-e523-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.371737ms
Aug 23 09:30:27.086: INFO: Pod "pod-460a2455-e523-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075953711s
Aug 23 09:30:29.090: INFO: Pod "pod-460a2455-e523-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079685363s
STEP: Saw pod success
Aug 23 09:30:29.090: INFO: Pod "pod-460a2455-e523-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:30:29.092: INFO: Trying to get logs from node hunter-worker2 pod pod-460a2455-e523-11ea-87d5-0242ac11000a container test-container:
STEP: delete the pod
Aug 23 09:30:29.125: INFO: Waiting for pod pod-460a2455-e523-11ea-87d5-0242ac11000a to disappear
Aug 23 09:30:29.199: INFO: Pod pod-460a2455-e523-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:30:29.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2mc2c" for this suite.
Aug 23 09:30:35.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:30:35.283: INFO: namespace: e2e-tests-emptydir-2mc2c, resource: bindings, ignored listing per whitelist
Aug 23 09:30:35.290: INFO: namespace e2e-tests-emptydir-2mc2c deletion completed in 6.086450222s
• [SLOW TEST:10.428 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:30:35.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Aug 23 09:30:35.390: INFO: Waiting up to 5m0s for pod "var-expansion-4c3c151f-e523-11ea-87d5-0242ac11000a" in namespace "e2e-tests-var-expansion-6vg4p" to be "success or failure"
Aug 23 09:30:35.410: INFO: Pod "var-expansion-4c3c151f-e523-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.853224ms
Aug 23 09:30:37.413: INFO: Pod "var-expansion-4c3c151f-e523-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022843282s
Aug 23 09:30:39.416: INFO: Pod "var-expansion-4c3c151f-e523-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025827468s
STEP: Saw pod success
Aug 23 09:30:39.416: INFO: Pod "var-expansion-4c3c151f-e523-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:30:39.418: INFO: Trying to get logs from node hunter-worker pod var-expansion-4c3c151f-e523-11ea-87d5-0242ac11000a container dapi-container:
STEP: delete the pod
Aug 23 09:30:39.438: INFO: Waiting for pod var-expansion-4c3c151f-e523-11ea-87d5-0242ac11000a to disappear
Aug 23 09:30:39.461: INFO: Pod var-expansion-4c3c151f-e523-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:30:39.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-6vg4p" for this suite.
Aug 23 09:30:45.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:30:45.584: INFO: namespace: e2e-tests-var-expansion-6vg4p, resource: bindings, ignored listing per whitelist
Aug 23 09:30:45.732: INFO: namespace e2e-tests-var-expansion-6vg4p deletion completed in 6.268366734s
• [SLOW TEST:10.442 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:30:45.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:30:52.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-vfb9h" for this suite.
Aug 23 09:30:58.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:30:58.566: INFO: namespace: e2e-tests-namespaces-vfb9h, resource: bindings, ignored listing per whitelist
Aug 23 09:30:58.614: INFO: namespace e2e-tests-namespaces-vfb9h deletion completed in 6.081263254s
STEP: Destroying namespace "e2e-tests-nsdeletetest-t9h62" for this suite.
Aug 23 09:30:58.616: INFO: Namespace e2e-tests-nsdeletetest-t9h62 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-9dqs9" for this suite.
Aug 23 09:31:04.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:31:04.703: INFO: namespace: e2e-tests-nsdeletetest-9dqs9, resource: bindings, ignored listing per whitelist
Aug 23 09:31:04.744: INFO: namespace e2e-tests-nsdeletetest-9dqs9 deletion completed in 6.12838415s
• [SLOW TEST:19.012 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:31:04.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Aug 23 09:31:04.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 23 09:31:07.505: INFO: stderr: ""
Aug 23 09:31:07.505: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45087\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45087/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:31:07.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mqsz5" for this suite.
Aug 23 09:31:17.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:31:17.594: INFO: namespace: e2e-tests-kubectl-mqsz5, resource: bindings, ignored listing per whitelist
Aug 23 09:31:17.611: INFO: namespace e2e-tests-kubectl-mqsz5 deletion completed in 10.102055546s
• [SLOW TEST:12.867 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:31:17.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-c5m2h
Aug 23 09:31:21.730: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-c5m2h
STEP: checking the pod's current state and verifying that restartCount is present
Aug 23 09:31:21.732: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:35:22.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-c5m2h" for this suite.
Aug 23 09:35:30.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:35:30.489: INFO: namespace: e2e-tests-container-probe-c5m2h, resource: bindings, ignored listing per whitelist
Aug 23 09:35:30.505: INFO: namespace e2e-tests-container-probe-c5m2h deletion completed in 8.076289667s
• [SLOW TEST:252.893 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:35:30.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-fcc67fd7-e523-11ea-87d5-0242ac11000a STEP: Creating a pod to test consume secrets Aug 23 09:35:31.865: INFO: Waiting up to 5m0s for pod "pod-secrets-fccf1f60-e523-11ea-87d5-0242ac11000a" in namespace "e2e-tests-secrets-lk8t5" to be "success or failure" Aug 23 09:35:32.311: INFO: Pod "pod-secrets-fccf1f60-e523-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 445.735078ms Aug 23 09:35:34.314: INFO: Pod "pod-secrets-fccf1f60-e523-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.449395324s Aug 23 09:35:36.319: INFO: Pod "pod-secrets-fccf1f60-e523-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.453503572s Aug 23 09:35:38.322: INFO: Pod "pod-secrets-fccf1f60-e523-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.456932152s STEP: Saw pod success Aug 23 09:35:38.322: INFO: Pod "pod-secrets-fccf1f60-e523-11ea-87d5-0242ac11000a" satisfied condition "success or failure" Aug 23 09:35:38.324: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-fccf1f60-e523-11ea-87d5-0242ac11000a container secret-volume-test: STEP: delete the pod Aug 23 09:35:38.359: INFO: Waiting for pod pod-secrets-fccf1f60-e523-11ea-87d5-0242ac11000a to disappear Aug 23 09:35:38.375: INFO: Pod pod-secrets-fccf1f60-e523-11ea-87d5-0242ac11000a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:35:38.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-lk8t5" for this suite. 
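The Secrets test above repeats one pattern throughout this log: poll the pod's phase every couple of seconds until it reaches "success or failure" (a terminal phase) or a 5m0s timeout expires. A minimal sketch of that wait loop, with `get_phase` as a purely hypothetical stand-in for the API call the framework makes:

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal phase is seen or timeout.

    Mirrors the log's 'Waiting up to 5m0s for pod ... to be
    "success or failure"' messages; get_phase is hypothetical.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in want:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)

# Simulated run: two Pending observations, then Succeeded,
# matching the shape of the elapsed-time lines above.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod_phase(lambda: next(phases), interval=0))
```

The real framework additionally distinguishes `Succeeded` from `Failed` when asserting "Saw pod success"; this sketch only reproduces the polling shape.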
Aug 23 09:35:44.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:35:44.657: INFO: namespace: e2e-tests-secrets-lk8t5, resource: bindings, ignored listing per whitelist Aug 23 09:35:45.153: INFO: namespace e2e-tests-secrets-lk8t5 deletion completed in 6.774643324s • [SLOW TEST:14.647 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:35:45.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Aug 23 09:35:45.748: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05391097-e524-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-sst2w" to be "success or failure" Aug 23 09:35:45.831: INFO: Pod "downwardapi-volume-05391097-e524-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", 
readiness=false. Elapsed: 83.021009ms Aug 23 09:35:47.834: INFO: Pod "downwardapi-volume-05391097-e524-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086171655s Aug 23 09:35:49.837: INFO: Pod "downwardapi-volume-05391097-e524-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089349964s Aug 23 09:35:51.841: INFO: Pod "downwardapi-volume-05391097-e524-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093575657s STEP: Saw pod success Aug 23 09:35:51.841: INFO: Pod "downwardapi-volume-05391097-e524-11ea-87d5-0242ac11000a" satisfied condition "success or failure" Aug 23 09:35:51.844: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-05391097-e524-11ea-87d5-0242ac11000a container client-container: STEP: delete the pod Aug 23 09:35:52.096: INFO: Waiting for pod downwardapi-volume-05391097-e524-11ea-87d5-0242ac11000a to disappear Aug 23 09:35:52.165: INFO: Pod downwardapi-volume-05391097-e524-11ea-87d5-0242ac11000a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:35:52.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sst2w" for this suite. 
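The "should set mode on item file" test checks that the projected downward API volume item carries the explicit file mode requested in the pod spec (conformance uses a mode such as 0400 and compares the rendered mode string inside the container). A local sketch of the same mode-string check, assuming a Linux filesystem; the temp file stands in for the projected item:

```python
import os
import stat
import tempfile

def mode_string(path):
    # Render permissions the way `ls -l` does, e.g. "-r--------".
    return stat.filemode(os.stat(path).st_mode)

# Create a scratch file and give it the 0400 mode the test requests.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o400)
print(mode_string(path))  # -r--------
os.chmod(path, 0o600)  # restore write bit so cleanup succeeds everywhere
os.unlink(path)
```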
Aug 23 09:35:58.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:35:58.300: INFO: namespace: e2e-tests-projected-sst2w, resource: bindings, ignored listing per whitelist Aug 23 09:35:58.310: INFO: namespace e2e-tests-projected-sst2w deletion completed in 6.140339747s • [SLOW TEST:13.157 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:35:58.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-hxtvb STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-hxtvb STEP: Deleting pre-stop pod Aug 23 09:36:11.762: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:36:11.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-hxtvb" for this suite. Aug 23 09:36:52.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:36:52.143: INFO: namespace: e2e-tests-prestop-hxtvb, resource: bindings, ignored listing per whitelist Aug 23 09:36:52.185: INFO: namespace e2e-tests-prestop-hxtvb deletion completed in 40.298500771s • [SLOW TEST:53.875 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:36:52.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Aug 23 
09:36:56.847: INFO: Successfully updated pod "labelsupdate2cdfd448-e524-11ea-87d5-0242ac11000a" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:36:58.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pljpn" for this suite. Aug 23 09:37:22.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:37:22.933: INFO: namespace: e2e-tests-projected-pljpn, resource: bindings, ignored listing per whitelist Aug 23 09:37:22.974: INFO: namespace e2e-tests-projected-pljpn deletion completed in 24.082266478s • [SLOW TEST:30.789 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:37:22.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7n454 
STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 23 09:37:23.246: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 23 09:37:47.666: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.56:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-7n454 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 23 09:37:47.666: INFO: >>> kubeConfig: /root/.kube/config I0823 09:37:47.695080 6 log.go:172] (0xc000aa3080) (0xc002355860) Create stream I0823 09:37:47.695108 6 log.go:172] (0xc000aa3080) (0xc002355860) Stream added, broadcasting: 1 I0823 09:37:47.697535 6 log.go:172] (0xc000aa3080) Reply frame received for 1 I0823 09:37:47.697567 6 log.go:172] (0xc000aa3080) (0xc002355900) Create stream I0823 09:37:47.697579 6 log.go:172] (0xc000aa3080) (0xc002355900) Stream added, broadcasting: 3 I0823 09:37:47.698352 6 log.go:172] (0xc000aa3080) Reply frame received for 3 I0823 09:37:47.698380 6 log.go:172] (0xc000aa3080) (0xc0023559a0) Create stream I0823 09:37:47.698389 6 log.go:172] (0xc000aa3080) (0xc0023559a0) Stream added, broadcasting: 5 I0823 09:37:47.699119 6 log.go:172] (0xc000aa3080) Reply frame received for 5 I0823 09:37:47.781239 6 log.go:172] (0xc000aa3080) Data frame received for 3 I0823 09:37:47.781267 6 log.go:172] (0xc002355900) (3) Data frame handling I0823 09:37:47.781284 6 log.go:172] (0xc002355900) (3) Data frame sent I0823 09:37:47.781296 6 log.go:172] (0xc000aa3080) Data frame received for 3 I0823 09:37:47.781306 6 log.go:172] (0xc002355900) (3) Data frame handling I0823 09:37:47.781441 6 log.go:172] (0xc000aa3080) Data frame received for 5 I0823 09:37:47.781472 6 log.go:172] (0xc0023559a0) (5) Data frame handling I0823 09:37:47.782854 6 log.go:172] (0xc000aa3080) Data frame received for 1 I0823 09:37:47.782870 6 log.go:172] (0xc002355860) (1) 
Data frame handling I0823 09:37:47.782882 6 log.go:172] (0xc002355860) (1) Data frame sent I0823 09:37:47.783045 6 log.go:172] (0xc000aa3080) (0xc002355860) Stream removed, broadcasting: 1 I0823 09:37:47.783157 6 log.go:172] (0xc000aa3080) (0xc002355860) Stream removed, broadcasting: 1 I0823 09:37:47.783192 6 log.go:172] (0xc000aa3080) (0xc002355900) Stream removed, broadcasting: 3 I0823 09:37:47.783220 6 log.go:172] (0xc000aa3080) (0xc0023559a0) Stream removed, broadcasting: 5 Aug 23 09:37:47.783: INFO: Found all expected endpoints: [netserver-0] I0823 09:37:47.783527 6 log.go:172] (0xc000aa3080) Go away received Aug 23 09:37:47.785: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.52:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-7n454 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 23 09:37:47.785: INFO: >>> kubeConfig: /root/.kube/config I0823 09:37:47.813095 6 log.go:172] (0xc000aa3550) (0xc002355d60) Create stream I0823 09:37:47.813114 6 log.go:172] (0xc000aa3550) (0xc002355d60) Stream added, broadcasting: 1 I0823 09:37:47.815135 6 log.go:172] (0xc000aa3550) Reply frame received for 1 I0823 09:37:47.815173 6 log.go:172] (0xc000aa3550) (0xc0020d1360) Create stream I0823 09:37:47.815187 6 log.go:172] (0xc000aa3550) (0xc0020d1360) Stream added, broadcasting: 3 I0823 09:37:47.816028 6 log.go:172] (0xc000aa3550) Reply frame received for 3 I0823 09:37:47.816066 6 log.go:172] (0xc000aa3550) (0xc001032fa0) Create stream I0823 09:37:47.816076 6 log.go:172] (0xc000aa3550) (0xc001032fa0) Stream added, broadcasting: 5 I0823 09:37:47.816877 6 log.go:172] (0xc000aa3550) Reply frame received for 5 I0823 09:37:47.871814 6 log.go:172] (0xc000aa3550) Data frame received for 5 I0823 09:37:47.871884 6 log.go:172] (0xc001032fa0) (5) Data frame handling I0823 09:37:47.871914 6 log.go:172] (0xc000aa3550) Data frame 
received for 3 I0823 09:37:47.871926 6 log.go:172] (0xc0020d1360) (3) Data frame handling I0823 09:37:47.871941 6 log.go:172] (0xc0020d1360) (3) Data frame sent I0823 09:37:47.872282 6 log.go:172] (0xc000aa3550) Data frame received for 3 I0823 09:37:47.872298 6 log.go:172] (0xc0020d1360) (3) Data frame handling I0823 09:37:47.873521 6 log.go:172] (0xc000aa3550) Data frame received for 1 I0823 09:37:47.873583 6 log.go:172] (0xc002355d60) (1) Data frame handling I0823 09:37:47.873646 6 log.go:172] (0xc002355d60) (1) Data frame sent I0823 09:37:47.873675 6 log.go:172] (0xc000aa3550) (0xc002355d60) Stream removed, broadcasting: 1 I0823 09:37:47.873707 6 log.go:172] (0xc000aa3550) Go away received I0823 09:37:47.873761 6 log.go:172] (0xc000aa3550) (0xc002355d60) Stream removed, broadcasting: 1 I0823 09:37:47.873773 6 log.go:172] (0xc000aa3550) (0xc0020d1360) Stream removed, broadcasting: 3 I0823 09:37:47.873783 6 log.go:172] (0xc000aa3550) (0xc001032fa0) Stream removed, broadcasting: 5 Aug 23 09:37:47.873: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:37:47.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-7n454" for this suite. 
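Beneath the stream-frame noise, the networking check above is simple: exec `curl http://<pod-ip>:8080/hostName` against each netserver pod and confirm every expected hostname comes back. A loopback stand-in for that probe (the handler and the "netserver-0" reply are illustrative; the real endpoint lives in the netserver test image):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HostName(BaseHTTPRequestHandler):
    """Toy /hostName endpoint answering with a fixed pod name."""
    def do_GET(self):
        body = b"netserver-0"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *a):  # keep request logging quiet
        pass

srv = HTTPServer(("127.0.0.1", 0), HostName)
threading.Thread(target=srv.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{srv.server_port}/hostName"
resp = urllib.request.urlopen(url, timeout=5).read().decode()
print(resp)  # netserver-0
srv.shutdown()
```

The log's `grep -v '^\s*$'` merely strips blank lines from the curl output before the hostname is compared against the expected endpoint list.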
Aug 23 09:38:03.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:38:03.911: INFO: namespace: e2e-tests-pod-network-test-7n454, resource: bindings, ignored listing per whitelist Aug 23 09:38:03.953: INFO: namespace e2e-tests-pod-network-test-7n454 deletion completed in 16.076201389s • [SLOW TEST:40.979 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:38:03.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-n2ks5 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n2ks5 to expose endpoints map[] Aug 23 09:38:04.133: INFO: Get endpoints failed (12.404987ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not 
found Aug 23 09:38:05.136: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n2ks5 exposes endpoints map[] (1.015868325s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-n2ks5 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n2ks5 to expose endpoints map[pod1:[80]] Aug 23 09:38:09.194: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n2ks5 exposes endpoints map[pod1:[80]] (4.052811822s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-n2ks5 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n2ks5 to expose endpoints map[pod1:[80] pod2:[80]] Aug 23 09:38:12.315: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n2ks5 exposes endpoints map[pod1:[80] pod2:[80]] (3.117770395s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-n2ks5 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n2ks5 to expose endpoints map[pod2:[80]] Aug 23 09:38:13.343: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n2ks5 exposes endpoints map[pod2:[80]] (1.024672734s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-n2ks5 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-n2ks5 to expose endpoints map[] Aug 23 09:38:14.365: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-n2ks5 exposes endpoints map[] (1.018296581s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:38:14.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-n2ks5" for this suite. 
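Each "successfully validated that service endpoint-test2 ... exposes endpoints map[...]" line above is an equality check between the observed endpoints and an expected pod-to-ports mapping. A sketch of that comparison (function name and sample data are illustrative, drawn from the log's `map[pod1:[80] pod2:[80]]` notation):

```python
def validate_endpoints(observed, expected):
    """True when the observed {pod: [ports]} mapping matches the
    expected one, ignoring pod order and port order -- the shape
    of the log's 'exposes endpoints map[pod1:[80] pod2:[80]]'
    validation."""
    norm = lambda m: {pod: sorted(ports) for pod, ports in m.items()}
    return norm(observed) == norm(expected)

# After pod2 is created: both pods back the service on port 80.
print(validate_endpoints({"pod2": [80], "pod1": [80]},
                         {"pod1": [80], "pod2": [80]}))  # True
# After pod1 is deleted: only pod2 should remain.
print(validate_endpoints({"pod1": [80], "pod2": [80]},
                         {"pod2": [80]}))  # False
```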
Aug 23 09:38:20.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:38:20.897: INFO: namespace: e2e-tests-services-n2ks5, resource: bindings, ignored listing per whitelist Aug 23 09:38:20.955: INFO: namespace e2e-tests-services-n2ks5 deletion completed in 6.217631369s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:17.002 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:38:20.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-krdhq [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod 
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-krdhq STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-krdhq STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-krdhq STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-krdhq Aug 23 09:38:27.122: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-krdhq, name: ss-0, uid: 6550568c-e524-11ea-a485-0242ac120004, status phase: Pending. Waiting for statefulset controller to delete. Aug 23 09:38:27.767: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-krdhq, name: ss-0, uid: 6550568c-e524-11ea-a485-0242ac120004, status phase: Failed. Waiting for statefulset controller to delete. Aug 23 09:38:27.772: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-krdhq, name: ss-0, uid: 6550568c-e524-11ea-a485-0242ac120004, status phase: Failed. Waiting for statefulset controller to delete. 
Aug 23 09:38:27.850: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-krdhq STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-krdhq STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-krdhq and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Aug 23 09:38:34.340: INFO: Deleting all statefulset in ns e2e-tests-statefulset-krdhq Aug 23 09:38:34.342: INFO: Scaling statefulset ss to 0 Aug 23 09:38:54.362: INFO: Waiting for statefulset status.replicas updated to 0 Aug 23 09:38:54.365: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:38:54.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-krdhq" for this suite. 
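The "Should recreate evicted statefulset" block waits for a specific event sequence: the port-conflicted `ss-0` pod reaches `Failed`, the controller deletes it, and a fresh pod (new UID) comes up `Running` once the conflict is removed. A sketch of that watch logic over an ordered event stream; the function and event tuples are illustrative, not framework code:

```python
def recreated_after_failure(events):
    """events: ordered (uid, phase) observations for one stateful
    pod name. True once a Failed instance has been replaced by a
    Running pod with a different uid -- the condition the
    'Waiting until stateful pod ss-0 will be recreated' step
    polls for."""
    failed_uid = None
    for uid, phase in events:
        if phase == "Failed":
            failed_uid = uid
        elif phase == "Running" and failed_uid and uid != failed_uid:
            return True
    return False

# Event shape taken from the log: same uid observed Pending then
# Failed, then a recreated pod (new uid) running.
seen = [
    ("6550568c", "Pending"),
    ("6550568c", "Failed"),
    ("9a1b2c3d", "Running"),  # hypothetical replacement uid
]
print(recreated_after_failure(seen))  # True
```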
Aug 23 09:39:02.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:39:02.479: INFO: namespace: e2e-tests-statefulset-krdhq, resource: bindings, ignored listing per whitelist Aug 23 09:39:02.591: INFO: namespace e2e-tests-statefulset-krdhq deletion completed in 8.144466408s • [SLOW TEST:41.636 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:39:02.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-6mcvq [It] should perform rolling updates and roll backs of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Aug 23 09:39:03.755: INFO: Found 0 stateful pods, waiting for 3 Aug 23 09:39:13.761: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 23 09:39:13.761: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 23 09:39:13.761: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 23 09:39:23.761: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 23 09:39:23.761: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 23 09:39:23.761: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Aug 23 09:39:23.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6mcvq ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 23 09:39:24.049: INFO: stderr: "I0823 09:39:23.904287 789 log.go:172] (0xc000138840) (0xc00075e640) Create stream\nI0823 09:39:23.904345 789 log.go:172] (0xc000138840) (0xc00075e640) Stream added, broadcasting: 1\nI0823 09:39:23.906773 789 log.go:172] (0xc000138840) Reply frame received for 1\nI0823 09:39:23.906832 789 log.go:172] (0xc000138840) (0xc000126c80) Create stream\nI0823 09:39:23.906851 789 log.go:172] (0xc000138840) (0xc000126c80) Stream added, broadcasting: 3\nI0823 09:39:23.908021 789 log.go:172] (0xc000138840) Reply frame received for 3\nI0823 09:39:23.908071 789 log.go:172] (0xc000138840) (0xc000572000) Create stream\nI0823 09:39:23.908084 789 log.go:172] (0xc000138840) (0xc000572000) Stream added, broadcasting: 5\nI0823 09:39:23.909306 789 log.go:172] (0xc000138840) Reply frame received for 5\nI0823 09:39:24.039978 789 log.go:172] (0xc000138840) Data frame received for 5\nI0823 09:39:24.040013 
789 log.go:172] (0xc000572000) (5) Data frame handling\nI0823 09:39:24.040050 789 log.go:172] (0xc000138840) Data frame received for 3\nI0823 09:39:24.040088 789 log.go:172] (0xc000126c80) (3) Data frame handling\nI0823 09:39:24.040119 789 log.go:172] (0xc000126c80) (3) Data frame sent\nI0823 09:39:24.041075 789 log.go:172] (0xc000138840) Data frame received for 3\nI0823 09:39:24.041100 789 log.go:172] (0xc000126c80) (3) Data frame handling\nI0823 09:39:24.042375 789 log.go:172] (0xc000138840) Data frame received for 1\nI0823 09:39:24.042407 789 log.go:172] (0xc00075e640) (1) Data frame handling\nI0823 09:39:24.042438 789 log.go:172] (0xc00075e640) (1) Data frame sent\nI0823 09:39:24.042456 789 log.go:172] (0xc000138840) (0xc00075e640) Stream removed, broadcasting: 1\nI0823 09:39:24.042478 789 log.go:172] (0xc000138840) Go away received\nI0823 09:39:24.042854 789 log.go:172] (0xc000138840) (0xc00075e640) Stream removed, broadcasting: 1\nI0823 09:39:24.042892 789 log.go:172] (0xc000138840) (0xc000126c80) Stream removed, broadcasting: 3\nI0823 09:39:24.042918 789 log.go:172] (0xc000138840) (0xc000572000) Stream removed, broadcasting: 5\n" Aug 23 09:39:24.049: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 23 09:39:24.049: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Aug 23 09:39:34.081: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Aug 23 09:39:44.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6mcvq ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 23 09:39:44.318: INFO: stderr: "I0823 09:39:44.244035 811 log.go:172] (0xc0001306e0) (0xc000768640) Create stream\nI0823 
09:39:44.244089 811 log.go:172] (0xc0001306e0) (0xc000768640) Stream added, broadcasting: 1\nI0823 09:39:44.246613 811 log.go:172] (0xc0001306e0) Reply frame received for 1\nI0823 09:39:44.246674 811 log.go:172] (0xc0001306e0) (0xc00069ac80) Create stream\nI0823 09:39:44.246690 811 log.go:172] (0xc0001306e0) (0xc00069ac80) Stream added, broadcasting: 3\nI0823 09:39:44.247700 811 log.go:172] (0xc0001306e0) Reply frame received for 3\nI0823 09:39:44.247727 811 log.go:172] (0xc0001306e0) (0xc0007686e0) Create stream\nI0823 09:39:44.247734 811 log.go:172] (0xc0001306e0) (0xc0007686e0) Stream added, broadcasting: 5\nI0823 09:39:44.248680 811 log.go:172] (0xc0001306e0) Reply frame received for 5\nI0823 09:39:44.311884 811 log.go:172] (0xc0001306e0) Data frame received for 5\nI0823 09:39:44.311923 811 log.go:172] (0xc0007686e0) (5) Data frame handling\nI0823 09:39:44.311946 811 log.go:172] (0xc0001306e0) Data frame received for 3\nI0823 09:39:44.311954 811 log.go:172] (0xc00069ac80) (3) Data frame handling\nI0823 09:39:44.311963 811 log.go:172] (0xc00069ac80) (3) Data frame sent\nI0823 09:39:44.311972 811 log.go:172] (0xc0001306e0) Data frame received for 3\nI0823 09:39:44.311980 811 log.go:172] (0xc00069ac80) (3) Data frame handling\nI0823 09:39:44.313376 811 log.go:172] (0xc0001306e0) Data frame received for 1\nI0823 09:39:44.313399 811 log.go:172] (0xc000768640) (1) Data frame handling\nI0823 09:39:44.313409 811 log.go:172] (0xc000768640) (1) Data frame sent\nI0823 09:39:44.313419 811 log.go:172] (0xc0001306e0) (0xc000768640) Stream removed, broadcasting: 1\nI0823 09:39:44.313430 811 log.go:172] (0xc0001306e0) Go away received\nI0823 09:39:44.313623 811 log.go:172] (0xc0001306e0) (0xc000768640) Stream removed, broadcasting: 1\nI0823 09:39:44.313650 811 log.go:172] (0xc0001306e0) (0xc00069ac80) Stream removed, broadcasting: 3\nI0823 09:39:44.313662 811 log.go:172] (0xc0001306e0) (0xc0007686e0) Stream removed, broadcasting: 5\n" Aug 23 09:39:44.318: INFO: stdout: 
"'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 23 09:39:44.318: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Aug 23 09:39:54.335: INFO: Waiting for StatefulSet e2e-tests-statefulset-6mcvq/ss2 to complete update
Aug 23 09:39:54.335: INFO: Waiting for Pod e2e-tests-statefulset-6mcvq/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 23 09:39:54.335: INFO: Waiting for Pod e2e-tests-statefulset-6mcvq/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 23 09:39:54.335: INFO: Waiting for Pod e2e-tests-statefulset-6mcvq/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 23 09:40:04.342: INFO: Waiting for StatefulSet e2e-tests-statefulset-6mcvq/ss2 to complete update
Aug 23 09:40:04.342: INFO: Waiting for Pod e2e-tests-statefulset-6mcvq/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 23 09:40:04.342: INFO: Waiting for Pod e2e-tests-statefulset-6mcvq/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 23 09:40:14.342: INFO: Waiting for StatefulSet e2e-tests-statefulset-6mcvq/ss2 to complete update
Aug 23 09:40:14.342: INFO: Waiting for Pod e2e-tests-statefulset-6mcvq/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Aug 23 09:40:24.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6mcvq ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 23 09:40:24.696: INFO: stderr: "I0823 09:40:24.567429 834 log.go:172] (0xc0008142c0) (0xc0006fc640) Create stream\nI0823 09:40:24.567479 834 log.go:172] (0xc0008142c0) (0xc0006fc640) Stream added, broadcasting: 1\nI0823 09:40:24.569321 834 log.go:172] (0xc0008142c0) Reply frame received for 1\nI0823 09:40:24.569357 834 log.go:172] (0xc0008142c0) (0xc000676d20) Create
stream\nI0823 09:40:24.569366 834 log.go:172] (0xc0008142c0) (0xc000676d20) Stream added, broadcasting: 3\nI0823 09:40:24.570187 834 log.go:172] (0xc0008142c0) Reply frame received for 3\nI0823 09:40:24.570218 834 log.go:172] (0xc0008142c0) (0xc0006fc6e0) Create stream\nI0823 09:40:24.570225 834 log.go:172] (0xc0008142c0) (0xc0006fc6e0) Stream added, broadcasting: 5\nI0823 09:40:24.571129 834 log.go:172] (0xc0008142c0) Reply frame received for 5\nI0823 09:40:24.688566 834 log.go:172] (0xc0008142c0) Data frame received for 5\nI0823 09:40:24.688635 834 log.go:172] (0xc0006fc6e0) (5) Data frame handling\nI0823 09:40:24.688675 834 log.go:172] (0xc0008142c0) Data frame received for 3\nI0823 09:40:24.688708 834 log.go:172] (0xc000676d20) (3) Data frame handling\nI0823 09:40:24.688839 834 log.go:172] (0xc000676d20) (3) Data frame sent\nI0823 09:40:24.688865 834 log.go:172] (0xc0008142c0) Data frame received for 3\nI0823 09:40:24.688879 834 log.go:172] (0xc000676d20) (3) Data frame handling\nI0823 09:40:24.690663 834 log.go:172] (0xc0008142c0) Data frame received for 1\nI0823 09:40:24.690685 834 log.go:172] (0xc0006fc640) (1) Data frame handling\nI0823 09:40:24.690695 834 log.go:172] (0xc0006fc640) (1) Data frame sent\nI0823 09:40:24.690707 834 log.go:172] (0xc0008142c0) (0xc0006fc640) Stream removed, broadcasting: 1\nI0823 09:40:24.690826 834 log.go:172] (0xc0008142c0) Go away received\nI0823 09:40:24.690869 834 log.go:172] (0xc0008142c0) (0xc0006fc640) Stream removed, broadcasting: 1\nI0823 09:40:24.690883 834 log.go:172] (0xc0008142c0) (0xc000676d20) Stream removed, broadcasting: 3\nI0823 09:40:24.690896 834 log.go:172] (0xc0008142c0) (0xc0006fc6e0) Stream removed, broadcasting: 5\n" Aug 23 09:40:24.696: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 23 09:40:24.696: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 23 09:40:34.729: INFO: Updating 
stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 23 09:40:44.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6mcvq ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 23 09:40:44.992: INFO: stderr: "I0823 09:40:44.887804 856 log.go:172] (0xc0008602c0) (0xc00057b4a0) Create stream\nI0823 09:40:44.887886 856 log.go:172] (0xc0008602c0) (0xc00057b4a0) Stream added, broadcasting: 1\nI0823 09:40:44.890593 856 log.go:172] (0xc0008602c0) Reply frame received for 1\nI0823 09:40:44.890653 856 log.go:172] (0xc0008602c0) (0xc000560000) Create stream\nI0823 09:40:44.890672 856 log.go:172] (0xc0008602c0) (0xc000560000) Stream added, broadcasting: 3\nI0823 09:40:44.891579 856 log.go:172] (0xc0008602c0) Reply frame received for 3\nI0823 09:40:44.891615 856 log.go:172] (0xc0008602c0) (0xc0005600a0) Create stream\nI0823 09:40:44.891626 856 log.go:172] (0xc0008602c0) (0xc0005600a0) Stream added, broadcasting: 5\nI0823 09:40:44.892643 856 log.go:172] (0xc0008602c0) Reply frame received for 5\nI0823 09:40:44.985143 856 log.go:172] (0xc0008602c0) Data frame received for 5\nI0823 09:40:44.985177 856 log.go:172] (0xc0005600a0) (5) Data frame handling\nI0823 09:40:44.985201 856 log.go:172] (0xc0008602c0) Data frame received for 3\nI0823 09:40:44.985218 856 log.go:172] (0xc000560000) (3) Data frame handling\nI0823 09:40:44.985237 856 log.go:172] (0xc000560000) (3) Data frame sent\nI0823 09:40:44.985247 856 log.go:172] (0xc0008602c0) Data frame received for 3\nI0823 09:40:44.985254 856 log.go:172] (0xc000560000) (3) Data frame handling\nI0823 09:40:44.986333 856 log.go:172] (0xc0008602c0) Data frame received for 1\nI0823 09:40:44.986356 856 log.go:172] (0xc00057b4a0) (1) Data frame handling\nI0823 09:40:44.986366 856 log.go:172] (0xc00057b4a0) (1) Data frame sent\nI0823 09:40:44.986389 856 log.go:172] (0xc0008602c0) (0xc00057b4a0) Stream removed, broadcasting: 1\nI0823 
09:40:44.986410 856 log.go:172] (0xc0008602c0) Go away received\nI0823 09:40:44.986559 856 log.go:172] (0xc0008602c0) (0xc00057b4a0) Stream removed, broadcasting: 1\nI0823 09:40:44.986576 856 log.go:172] (0xc0008602c0) (0xc000560000) Stream removed, broadcasting: 3\nI0823 09:40:44.986585 856 log.go:172] (0xc0008602c0) (0xc0005600a0) Stream removed, broadcasting: 5\n"
Aug 23 09:40:44.992: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 23 09:40:44.992: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Aug 23 09:40:55.011: INFO: Waiting for StatefulSet e2e-tests-statefulset-6mcvq/ss2 to complete update
Aug 23 09:40:55.011: INFO: Waiting for Pod e2e-tests-statefulset-6mcvq/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 23 09:40:55.011: INFO: Waiting for Pod e2e-tests-statefulset-6mcvq/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 23 09:40:55.011: INFO: Waiting for Pod e2e-tests-statefulset-6mcvq/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 23 09:41:05.017: INFO: Waiting for StatefulSet e2e-tests-statefulset-6mcvq/ss2 to complete update
Aug 23 09:41:05.017: INFO: Waiting for Pod e2e-tests-statefulset-6mcvq/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 23 09:41:05.017: INFO: Waiting for Pod e2e-tests-statefulset-6mcvq/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 23 09:41:15.138: INFO: Waiting for StatefulSet e2e-tests-statefulset-6mcvq/ss2 to complete update
Aug 23 09:41:15.138: INFO: Waiting for Pod e2e-tests-statefulset-6mcvq/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 23 09:41:25.016: INFO: Waiting for StatefulSet e2e-tests-statefulset-6mcvq/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Aug 23 09:41:35.144: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6mcvq
Aug 23 09:41:35.147: INFO: Scaling statefulset ss2 to 0
Aug 23 09:42:15.191: INFO: Waiting for statefulset status.replicas updated to 0
Aug 23 09:42:15.193: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:42:15.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-6mcvq" for this suite.
Aug 23 09:42:27.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:42:27.405: INFO: namespace: e2e-tests-statefulset-6mcvq, resource: bindings, ignored listing per whitelist
Aug 23 09:42:27.434: INFO: namespace e2e-tests-statefulset-6mcvq deletion completed in 12.131893318s
• [SLOW TEST:204.843 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:42:27.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building
a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 23 09:42:27.673: INFO: Waiting up to 5m0s for pod "pod-f4c875e6-e524-11ea-87d5-0242ac11000a" in namespace "e2e-tests-emptydir-7vm4g" to be "success or failure"
Aug 23 09:42:27.677: INFO: Pod "pod-f4c875e6-e524-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.658569ms
Aug 23 09:42:30.213: INFO: Pod "pod-f4c875e6-e524-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.539973212s
Aug 23 09:42:32.216: INFO: Pod "pod-f4c875e6-e524-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 4.543312619s
Aug 23 09:42:34.220: INFO: Pod "pod-f4c875e6-e524-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.546750238s
STEP: Saw pod success
Aug 23 09:42:34.220: INFO: Pod "pod-f4c875e6-e524-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:42:34.222: INFO: Trying to get logs from node hunter-worker pod pod-f4c875e6-e524-11ea-87d5-0242ac11000a container test-container:
STEP: delete the pod
Aug 23 09:42:34.334: INFO: Waiting for pod pod-f4c875e6-e524-11ea-87d5-0242ac11000a to disappear
Aug 23 09:42:34.390: INFO: Pod pod-f4c875e6-e524-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:42:34.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7vm4g" for this suite.
Aug 23 09:42:44.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:42:44.671: INFO: namespace: e2e-tests-emptydir-7vm4g, resource: bindings, ignored listing per whitelist
Aug 23 09:42:44.705: INFO: namespace e2e-tests-emptydir-7vm4g deletion completed in 10.311174765s
• [SLOW TEST:17.270 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Deployment
RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:42:44.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 09:42:44.823: INFO: Creating deployment "test-recreate-deployment"
Aug 23 09:42:44.865: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Aug 23 09:42:44.880: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Aug 23 09:42:46.887: INFO: Waiting deployment "test-recreate-deployment" to complete
Aug 23 09:42:46.890: INFO: deployment
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733772564, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733772564, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733772565, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733772564, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 23 09:42:48.894: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 23 09:42:48.902: INFO: Updating deployment test-recreate-deployment
Aug 23 09:42:48.902: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Aug 23 09:42:49.554: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-8bxvx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8bxvx/deployments/test-recreate-deployment,UID:ff03c852-e524-11ea-a485-0242ac120004,ResourceVersion:1680832,Generation:2,CreationTimestamp:2020-08-23 09:42:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision:
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-23 09:42:49 +0000 UTC 2020-08-23 09:42:49 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-23 09:42:49 +0000 UTC 2020-08-23 09:42:44 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Aug 23 09:42:49.560: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-8bxvx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8bxvx/replicasets/test-recreate-deployment-589c4bfd,UID:0181b819-e525-11ea-a485-0242ac120004,ResourceVersion:1680830,Generation:1,CreationTimestamp:2020-08-23 09:42:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ff03c852-e524-11ea-a485-0242ac120004 0xc0010e247f 0xc0010e25d0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 23 09:42:49.560: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 23 09:42:49.561: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-8bxvx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-8bxvx/replicasets/test-recreate-deployment-5bf7f65dc,UID:ff0c4a9f-e524-11ea-a485-0242ac120004,ResourceVersion:1680821,Generation:2,CreationTimestamp:2020-08-23 09:42:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment ff03c852-e524-11ea-a485-0242ac120004 0xc0010e2690 0xc0010e2691}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 23 09:42:49.576: INFO: Pod "test-recreate-deployment-589c4bfd-pknf5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-pknf5,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-8bxvx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-8bxvx/pods/test-recreate-deployment-589c4bfd-pknf5,UID:0183863d-e525-11ea-a485-0242ac120004,ResourceVersion:1680833,Generation:0,CreationTimestamp:2020-08-23 09:42:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 0181b819-e525-11ea-a485-0242ac120004 0xc0010e2faf 0xc0010e2fc0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-v8tln {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-v8tln,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-v8tln true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0010e3030} {node.kubernetes.io/unreachable Exists NoExecute 0xc0010e3050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:42:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:42:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:42:49 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:42:49 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-08-23 09:42:49 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:42:49.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-8bxvx" for this suite. 
Aug 23 09:42:57.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:42:57.676: INFO: namespace: e2e-tests-deployment-8bxvx, resource: bindings, ignored listing per whitelist
Aug 23 09:42:57.708: INFO: namespace e2e-tests-deployment-8bxvx deletion completed in 8.098804414s
• [SLOW TEST:13.003 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:42:57.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Aug 23 09:42:57.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 23 09:42:58.027: INFO: stderr: ""
Aug 23 09:42:58.027: INFO: stdout:
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:42:58.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-l8d5t" for this suite.
Aug 23 09:43:04.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:43:04.121: INFO: namespace: e2e-tests-kubectl-l8d5t, resource: bindings, ignored listing per whitelist
Aug 23 09:43:04.134: INFO: namespace e2e-tests-kubectl-l8d5t deletion completed in 6.102548557s
• [SLOW TEST:6.426 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl api-versions
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer
[NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:43:04.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Aug 23 09:43:04.277: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:43:12.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-nbs8p" for this suite.
Aug 23 09:43:36.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:43:36.783: INFO: namespace: e2e-tests-init-container-nbs8p, resource: bindings, ignored listing per whitelist
Aug 23 09:43:36.836: INFO: namespace e2e-tests-init-container-nbs8p deletion completed in 24.10604011s
• [SLOW TEST:32.701 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:43:36.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 23 09:43:37.049: INFO: Waiting up to 5m0s for pod "pod-1e2230aa-e525-11ea-87d5-0242ac11000a" in namespace "e2e-tests-emptydir-glv4l" to be "success or failure"
Aug 23 09:43:37.053: INFO: Pod "pod-1e2230aa-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.396327ms
Aug 23 09:43:39.137: INFO: Pod "pod-1e2230aa-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088278764s
Aug 23 09:43:41.163: INFO: Pod "pod-1e2230aa-e525-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.113727167s
STEP: Saw pod success
Aug 23 09:43:41.163: INFO: Pod "pod-1e2230aa-e525-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:43:41.165: INFO: Trying to get logs from node hunter-worker pod pod-1e2230aa-e525-11ea-87d5-0242ac11000a container test-container: 
STEP: delete the pod
Aug 23 09:43:41.233: INFO: Waiting for pod pod-1e2230aa-e525-11ea-87d5-0242ac11000a to disappear
Aug 23 09:43:41.286: INFO: Pod pod-1e2230aa-e525-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:43:41.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-glv4l" for this suite.
Aug 23 09:43:47.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:43:47.368: INFO: namespace: e2e-tests-emptydir-glv4l, resource: bindings, ignored listing per whitelist
Aug 23 09:43:47.376: INFO: namespace e2e-tests-emptydir-glv4l deletion completed in 6.086100074s
• [SLOW TEST:10.540 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:43:47.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 09:43:47.455: INFO: Creating ReplicaSet my-hostname-basic-24588f97-e525-11ea-87d5-0242ac11000a
Aug 23 09:43:47.475: INFO: Pod name my-hostname-basic-24588f97-e525-11ea-87d5-0242ac11000a: Found 0 pods out of 1
Aug 23 09:43:52.480: INFO: Pod name my-hostname-basic-24588f97-e525-11ea-87d5-0242ac11000a: Found 1 pods out of 1
Aug 23 09:43:52.480: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-24588f97-e525-11ea-87d5-0242ac11000a" is running
Aug 23 09:43:52.482: INFO: Pod "my-hostname-basic-24588f97-e525-11ea-87d5-0242ac11000a-c8xl9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-23 09:43:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-23 09:43:50 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-23 09:43:50 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-23 09:43:47 +0000 UTC Reason: Message:}])
Aug 23 09:43:52.483: INFO: Trying to dial the pod
Aug 23 09:43:57.493: INFO: Controller my-hostname-basic-24588f97-e525-11ea-87d5-0242ac11000a: Got expected result from replica 1 [my-hostname-basic-24588f97-e525-11ea-87d5-0242ac11000a-c8xl9]: "my-hostname-basic-24588f97-e525-11ea-87d5-0242ac11000a-c8xl9", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:43:57.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-r7w4z" for this suite.
Aug 23 09:44:07.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:44:08.657: INFO: namespace: e2e-tests-replicaset-r7w4z, resource: bindings, ignored listing per whitelist
Aug 23 09:44:08.675: INFO: namespace e2e-tests-replicaset-r7w4z deletion completed in 11.179340641s
• [SLOW TEST:21.299 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:44:08.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 23 09:44:22.894: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 23 09:44:22.909: INFO: Pod pod-with-poststart-http-hook still exists
Aug 23 09:44:24.909: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 23 09:44:24.914: INFO: Pod pod-with-poststart-http-hook still exists
Aug 23 09:44:26.909: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 23 09:44:27.070: INFO: Pod pod-with-poststart-http-hook still exists
Aug 23 09:44:28.909: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 23 09:44:28.913: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:44:28.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-5rlv7" for this suite.
Aug 23 09:44:50.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:44:51.029: INFO: namespace: e2e-tests-container-lifecycle-hook-5rlv7, resource: bindings, ignored listing per whitelist
Aug 23 09:44:51.052: INFO: namespace e2e-tests-container-lifecycle-hook-5rlv7 deletion completed in 22.136546878s
• [SLOW TEST:42.377 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:44:51.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 23 09:44:51.215: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 23 09:44:56.218: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:44:56.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-6dg6n" for this suite.
Aug 23 09:45:04.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:45:04.398: INFO: namespace: e2e-tests-replication-controller-6dg6n, resource: bindings, ignored listing per whitelist
Aug 23 09:45:04.442: INFO: namespace e2e-tests-replication-controller-6dg6n deletion completed in 8.178605029s
• [SLOW TEST:13.389 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:45:04.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 09:45:04.732: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:45:05.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-lvm8k" for this suite.
Aug 23 09:45:11.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:45:11.930: INFO: namespace: e2e-tests-custom-resource-definition-lvm8k, resource: bindings, ignored listing per whitelist
Aug 23 09:45:11.943: INFO: namespace e2e-tests-custom-resource-definition-lvm8k deletion completed in 6.092510745s
• [SLOW TEST:7.501 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:45:11.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Aug 23 09:45:16.559: INFO: Successfully updated pod "labelsupdate56c095f4-e525-11ea-87d5-0242ac11000a"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:45:18.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qdlbr" for this suite.
Aug 23 09:45:41.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:45:41.037: INFO: namespace: e2e-tests-downward-api-qdlbr, resource: bindings, ignored listing per whitelist
Aug 23 09:45:41.071: INFO: namespace e2e-tests-downward-api-qdlbr deletion completed in 22.195616432s
• [SLOW TEST:29.128 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:45:41.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-68844b87-e525-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 09:45:42.127: INFO: Waiting up to 5m0s for pod "pod-configmaps-6884d875-e525-11ea-87d5-0242ac11000a" in namespace "e2e-tests-configmap-kv4cv" to be "success or failure"
Aug 23 09:45:42.189: INFO: Pod "pod-configmaps-6884d875-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 62.1748ms
Aug 23 09:45:44.502: INFO: Pod "pod-configmaps-6884d875-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.375272687s
Aug 23 09:45:46.567: INFO: Pod "pod-configmaps-6884d875-e525-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 4.440355911s
Aug 23 09:45:48.570: INFO: Pod "pod-configmaps-6884d875-e525-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.443377255s
STEP: Saw pod success
Aug 23 09:45:48.570: INFO: Pod "pod-configmaps-6884d875-e525-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:45:48.572: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-6884d875-e525-11ea-87d5-0242ac11000a container configmap-volume-test: 
STEP: delete the pod
Aug 23 09:45:49.163: INFO: Waiting for pod pod-configmaps-6884d875-e525-11ea-87d5-0242ac11000a to disappear
Aug 23 09:45:49.174: INFO: Pod pod-configmaps-6884d875-e525-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:45:49.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-kv4cv" for this suite.
Aug 23 09:45:55.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:45:55.317: INFO: namespace: e2e-tests-configmap-kv4cv, resource: bindings, ignored listing per whitelist
Aug 23 09:45:55.338: INFO: namespace e2e-tests-configmap-kv4cv deletion completed in 6.160829682s
• [SLOW TEST:14.266 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:45:55.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-70bc16f9-e525-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume secrets
Aug 23 09:45:55.747: INFO: Waiting up to 5m0s for pod "pod-secrets-70cf79c3-e525-11ea-87d5-0242ac11000a" in namespace "e2e-tests-secrets-5twch" to be "success or failure"
Aug 23 09:45:55.933: INFO: Pod "pod-secrets-70cf79c3-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 186.64758ms
Aug 23 09:45:57.937: INFO: Pod "pod-secrets-70cf79c3-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190612225s
Aug 23 09:45:59.950: INFO: Pod "pod-secrets-70cf79c3-e525-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 4.203919197s
Aug 23 09:46:02.239: INFO: Pod "pod-secrets-70cf79c3-e525-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.492138681s
STEP: Saw pod success
Aug 23 09:46:02.239: INFO: Pod "pod-secrets-70cf79c3-e525-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:46:02.242: INFO: Trying to get logs from node hunter-worker pod pod-secrets-70cf79c3-e525-11ea-87d5-0242ac11000a container secret-volume-test: 
STEP: delete the pod
Aug 23 09:46:02.449: INFO: Waiting for pod pod-secrets-70cf79c3-e525-11ea-87d5-0242ac11000a to disappear
Aug 23 09:46:02.478: INFO: Pod pod-secrets-70cf79c3-e525-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:46:02.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-5twch" for this suite.
Aug 23 09:46:08.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:46:08.722: INFO: namespace: e2e-tests-secrets-5twch, resource: bindings, ignored listing per whitelist
Aug 23 09:46:08.724: INFO: namespace e2e-tests-secrets-5twch deletion completed in 6.241359196s
• [SLOW TEST:13.386 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:46:08.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 23 09:46:08.867: INFO: Waiting up to 5m0s for pod "pod-78a051a4-e525-11ea-87d5-0242ac11000a" in namespace "e2e-tests-emptydir-hzl9z" to be "success or failure"
Aug 23 09:46:08.885: INFO: Pod "pod-78a051a4-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.6282ms
Aug 23 09:46:10.889: INFO: Pod "pod-78a051a4-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02160687s
Aug 23 09:46:13.131: INFO: Pod "pod-78a051a4-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263428296s
Aug 23 09:46:15.215: INFO: Pod "pod-78a051a4-e525-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.347396605s
STEP: Saw pod success
Aug 23 09:46:15.215: INFO: Pod "pod-78a051a4-e525-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:46:15.218: INFO: Trying to get logs from node hunter-worker2 pod pod-78a051a4-e525-11ea-87d5-0242ac11000a container test-container: 
STEP: delete the pod
Aug 23 09:46:15.953: INFO: Waiting for pod pod-78a051a4-e525-11ea-87d5-0242ac11000a to disappear
Aug 23 09:46:16.304: INFO: Pod pod-78a051a4-e525-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:46:16.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hzl9z" for this suite.
Aug 23 09:46:26.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:46:26.934: INFO: namespace: e2e-tests-emptydir-hzl9z, resource: bindings, ignored listing per whitelist
Aug 23 09:46:26.990: INFO: namespace e2e-tests-emptydir-hzl9z deletion completed in 10.681739412s
• [SLOW TEST:18.266 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:46:26.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Aug 23 09:46:27.677: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:46:27.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zbfvl" for this suite.
Aug 23 09:46:33.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:46:34.434: INFO: namespace: e2e-tests-kubectl-zbfvl, resource: bindings, ignored listing per whitelist
Aug 23 09:46:34.595: INFO: namespace e2e-tests-kubectl-zbfvl deletion completed in 6.829627066s
• [SLOW TEST:7.604 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:46:34.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-8890215b-e525-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 09:46:35.852: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-889be3fd-e525-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-bhg9m" to be "success or failure"
Aug 23 09:46:35.942: INFO: Pod "pod-projected-configmaps-889be3fd-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 90.219907ms
Aug 23 09:46:37.946: INFO: Pod "pod-projected-configmaps-889be3fd-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094544232s
Aug 23 09:46:39.951: INFO: Pod "pod-projected-configmaps-889be3fd-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098882757s
Aug 23 09:46:41.956: INFO: Pod "pod-projected-configmaps-889be3fd-e525-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104452908s
STEP: Saw pod success
Aug 23 09:46:41.956: INFO: Pod "pod-projected-configmaps-889be3fd-e525-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:46:41.960: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-889be3fd-e525-11ea-87d5-0242ac11000a container projected-configmap-volume-test: 
STEP: delete the pod
Aug 23 09:46:41.999: INFO: Waiting for pod pod-projected-configmaps-889be3fd-e525-11ea-87d5-0242ac11000a to disappear
Aug 23 09:46:42.022: INFO: Pod pod-projected-configmaps-889be3fd-e525-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:46:42.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bhg9m" for this suite.
Aug 23 09:46:48.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:46:48.076: INFO: namespace: e2e-tests-projected-bhg9m, resource: bindings, ignored listing per whitelist Aug 23 09:46:48.109: INFO: namespace e2e-tests-projected-bhg9m deletion completed in 6.083783506s • [SLOW TEST:13.514 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:46:48.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-sbzkv STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 23 09:46:48.233: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 23 09:47:14.935: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.72:8080/dial?request=hostName&protocol=udp&host=10.244.1.68&port=8081&tries=1'] 
Namespace:e2e-tests-pod-network-test-sbzkv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 09:47:14.935: INFO: >>> kubeConfig: /root/.kube/config
I0823 09:47:14.986903 6 log.go:172] (0xc000aa3080) (0xc001217220) Create stream
I0823 09:47:14.986929 6 log.go:172] (0xc000aa3080) (0xc001217220) Stream added, broadcasting: 1
I0823 09:47:14.997370 6 log.go:172] (0xc000aa3080) Reply frame received for 1
I0823 09:47:14.997431 6 log.go:172] (0xc000aa3080) (0xc0013cac80) Create stream
I0823 09:47:14.997444 6 log.go:172] (0xc000aa3080) (0xc0013cac80) Stream added, broadcasting: 3
I0823 09:47:14.998368 6 log.go:172] (0xc000aa3080) Reply frame received for 3
I0823 09:47:14.998412 6 log.go:172] (0xc000aa3080) (0xc0009f0780) Create stream
I0823 09:47:14.998424 6 log.go:172] (0xc000aa3080) (0xc0009f0780) Stream added, broadcasting: 5
I0823 09:47:14.999117 6 log.go:172] (0xc000aa3080) Reply frame received for 5
I0823 09:47:15.071539 6 log.go:172] (0xc000aa3080) Data frame received for 3
I0823 09:47:15.071572 6 log.go:172] (0xc0013cac80) (3) Data frame handling
I0823 09:47:15.071598 6 log.go:172] (0xc0013cac80) (3) Data frame sent
I0823 09:47:15.072182 6 log.go:172] (0xc000aa3080) Data frame received for 3
I0823 09:47:15.072216 6 log.go:172] (0xc0013cac80) (3) Data frame handling
I0823 09:47:15.072330 6 log.go:172] (0xc000aa3080) Data frame received for 5
I0823 09:47:15.072368 6 log.go:172] (0xc0009f0780) (5) Data frame handling
I0823 09:47:15.073991 6 log.go:172] (0xc000aa3080) Data frame received for 1
I0823 09:47:15.074023 6 log.go:172] (0xc001217220) (1) Data frame handling
I0823 09:47:15.074047 6 log.go:172] (0xc001217220) (1) Data frame sent
I0823 09:47:15.074067 6 log.go:172] (0xc000aa3080) (0xc001217220) Stream removed, broadcasting: 1
I0823 09:47:15.074088 6 log.go:172] (0xc000aa3080) Go away received
I0823 09:47:15.074250 6 log.go:172] (0xc000aa3080) (0xc001217220) Stream removed, broadcasting: 1
I0823 09:47:15.074267 6 log.go:172] (0xc000aa3080) (0xc0013cac80) Stream removed, broadcasting: 3
I0823 09:47:15.074274 6 log.go:172] (0xc000aa3080) (0xc0009f0780) Stream removed, broadcasting: 5
Aug 23 09:47:15.074: INFO: Waiting for endpoints: map[]
Aug 23 09:47:15.120: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.72:8080/dial?request=hostName&protocol=udp&host=10.244.2.71&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-sbzkv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 09:47:15.120: INFO: >>> kubeConfig: /root/.kube/config
I0823 09:47:15.147461 6 log.go:172] (0xc0015802c0) (0xc0009f0a00) Create stream
I0823 09:47:15.147494 6 log.go:172] (0xc0015802c0) (0xc0009f0a00) Stream added, broadcasting: 1
I0823 09:47:15.149356 6 log.go:172] (0xc0015802c0) Reply frame received for 1
I0823 09:47:15.149406 6 log.go:172] (0xc0015802c0) (0xc001ce8b40) Create stream
I0823 09:47:15.149419 6 log.go:172] (0xc0015802c0) (0xc001ce8b40) Stream added, broadcasting: 3
I0823 09:47:15.150108 6 log.go:172] (0xc0015802c0) Reply frame received for 3
I0823 09:47:15.150148 6 log.go:172] (0xc0015802c0) (0xc0012172c0) Create stream
I0823 09:47:15.150164 6 log.go:172] (0xc0015802c0) (0xc0012172c0) Stream added, broadcasting: 5
I0823 09:47:15.150938 6 log.go:172] (0xc0015802c0) Reply frame received for 5
I0823 09:47:15.211241 6 log.go:172] (0xc0015802c0) Data frame received for 3
I0823 09:47:15.211266 6 log.go:172] (0xc001ce8b40) (3) Data frame handling
I0823 09:47:15.211288 6 log.go:172] (0xc001ce8b40) (3) Data frame sent
I0823 09:47:15.212104 6 log.go:172] (0xc0015802c0) Data frame received for 3
I0823 09:47:15.212127 6 log.go:172] (0xc001ce8b40) (3) Data frame handling
I0823 09:47:15.212144 6 log.go:172] (0xc0015802c0) Data frame received for 5
I0823 09:47:15.212155 6 log.go:172] (0xc0012172c0) (5) Data frame handling
I0823 09:47:15.213702 6 log.go:172] (0xc0015802c0) Data frame received for 1
I0823 09:47:15.213718 6 log.go:172] (0xc0009f0a00) (1) Data frame handling
I0823 09:47:15.213731 6 log.go:172] (0xc0009f0a00) (1) Data frame sent
I0823 09:47:15.213744 6 log.go:172] (0xc0015802c0) (0xc0009f0a00) Stream removed, broadcasting: 1
I0823 09:47:15.213812 6 log.go:172] (0xc0015802c0) (0xc0009f0a00) Stream removed, broadcasting: 1
I0823 09:47:15.213830 6 log.go:172] (0xc0015802c0) (0xc001ce8b40) Stream removed, broadcasting: 3
I0823 09:47:15.213838 6 log.go:172] (0xc0015802c0) (0xc0012172c0) Stream removed, broadcasting: 5
I0823 09:47:15.213855 6 log.go:172] (0xc0015802c0) Go away received
Aug 23 09:47:15.213: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:47:15.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-sbzkv" for this suite.
Aug 23 09:47:39.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:47:39.322: INFO: namespace: e2e-tests-pod-network-test-sbzkv, resource: bindings, ignored listing per whitelist
Aug 23 09:47:39.339: INFO: namespace e2e-tests-pod-network-test-sbzkv deletion completed in 24.120403573s
• [SLOW TEST:51.230 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:47:39.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-2jbh
STEP: Creating a pod to test atomic-volume-subpath
Aug 23 09:47:40.881: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2jbh" in namespace "e2e-tests-subpath-mvgmb" to be "success or failure"
Aug 23 09:47:41.048: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Pending", Reason="", readiness=false. Elapsed: 166.668174ms
Aug 23 09:47:43.191: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.310187413s
Aug 23 09:47:45.252: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370355492s
Aug 23 09:47:47.323: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441649183s
Aug 23 09:47:49.327: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.44577423s
Aug 23 09:47:51.765: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.883545473s
Aug 23 09:47:53.910: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Running", Reason="", readiness=false. Elapsed: 13.02865145s
Aug 23 09:47:55.969: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Running", Reason="", readiness=false. Elapsed: 15.087692456s
Aug 23 09:47:57.973: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Running", Reason="", readiness=false. Elapsed: 17.091810384s
Aug 23 09:47:59.977: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Running", Reason="", readiness=false. Elapsed: 19.095556205s
Aug 23 09:48:01.981: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Running", Reason="", readiness=false. Elapsed: 21.100099518s
Aug 23 09:48:03.985: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Running", Reason="", readiness=false. Elapsed: 23.103697437s
Aug 23 09:48:05.988: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Running", Reason="", readiness=false. Elapsed: 25.106492531s
Aug 23 09:48:07.991: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Running", Reason="", readiness=false. Elapsed: 27.110114778s
Aug 23 09:48:09.994: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Running", Reason="", readiness=false. Elapsed: 29.113065894s
Aug 23 09:48:12.155: INFO: Pod "pod-subpath-test-secret-2jbh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.274137527s
STEP: Saw pod success
Aug 23 09:48:12.155: INFO: Pod "pod-subpath-test-secret-2jbh" satisfied condition "success or failure"
Aug 23 09:48:12.158: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-2jbh container test-container-subpath-secret-2jbh: 
STEP: delete the pod
Aug 23 09:48:12.696: INFO: Waiting for pod pod-subpath-test-secret-2jbh to disappear
Aug 23 09:48:12.724: INFO: Pod pod-subpath-test-secret-2jbh no longer exists
STEP: Deleting pod pod-subpath-test-secret-2jbh
Aug 23 09:48:12.724: INFO: Deleting pod "pod-subpath-test-secret-2jbh" in namespace "e2e-tests-subpath-mvgmb"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:48:12.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-mvgmb" for this suite.
Aug 23 09:48:18.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:48:18.831: INFO: namespace: e2e-tests-subpath-mvgmb, resource: bindings, ignored listing per whitelist
Aug 23 09:48:18.885: INFO: namespace e2e-tests-subpath-mvgmb deletion completed in 6.156804231s
• [SLOW TEST:39.546 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with secret pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:48:18.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 23 09:48:27.044: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 23 09:48:27.180: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 23 09:48:29.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 23 09:48:29.182: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 23 09:48:31.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 23 09:48:31.183: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 23 09:48:33.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 23 09:48:33.183: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 23 09:48:35.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 23 09:48:35.183: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 23 09:48:37.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 23 09:48:37.183: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 23 09:48:39.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 23 09:48:39.183: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 23 09:48:41.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 23 09:48:41.183: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 23 09:48:43.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 23 09:48:43.183: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 23 09:48:45.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 23 09:48:45.182: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 23 09:48:47.180: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 23 09:48:47.183: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:48:47.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-6n6t5" for this suite.
Aug 23 09:49:09.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:49:09.241: INFO: namespace: e2e-tests-container-lifecycle-hook-6n6t5, resource: bindings, ignored listing per whitelist
Aug 23 09:49:09.267: INFO: namespace e2e-tests-container-lifecycle-hook-6n6t5 deletion completed in 22.074089745s
• [SLOW TEST:50.381 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:49:09.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-e434e462-e525-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 09:49:09.422: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e43e7bd9-e525-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-pm8cm" to be "success or failure"
Aug 23 09:49:09.444: INFO: Pod "pod-projected-configmaps-e43e7bd9-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.654018ms
Aug 23 09:49:11.447: INFO: Pod "pod-projected-configmaps-e43e7bd9-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024839777s
Aug 23 09:49:13.450: INFO: Pod "pod-projected-configmaps-e43e7bd9-e525-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 4.028183546s
Aug 23 09:49:15.455: INFO: Pod "pod-projected-configmaps-e43e7bd9-e525-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033047994s
STEP: Saw pod success
Aug 23 09:49:15.455: INFO: Pod "pod-projected-configmaps-e43e7bd9-e525-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:49:15.457: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e43e7bd9-e525-11ea-87d5-0242ac11000a container projected-configmap-volume-test: 
STEP: delete the pod
Aug 23 09:49:15.487: INFO: Waiting for pod pod-projected-configmaps-e43e7bd9-e525-11ea-87d5-0242ac11000a to disappear
Aug 23 09:49:15.504: INFO: Pod pod-projected-configmaps-e43e7bd9-e525-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:49:15.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pm8cm" for this suite.
Aug 23 09:49:21.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:49:21.549: INFO: namespace: e2e-tests-projected-pm8cm, resource: bindings, ignored listing per whitelist
Aug 23 09:49:21.578: INFO: namespace e2e-tests-projected-pm8cm deletion completed in 6.070947271s
• [SLOW TEST:12.311 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:49:21.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-hdbsq/secret-test-eb8ee748-e525-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume secrets
Aug 23 09:49:21.698: INFO: Waiting up to 5m0s for pod "pod-configmaps-eb90c745-e525-11ea-87d5-0242ac11000a" in namespace "e2e-tests-secrets-hdbsq" to be "success or failure"
Aug 23 09:49:21.702: INFO: Pod "pod-configmaps-eb90c745-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193478ms
Aug 23 09:49:23.706: INFO: Pod "pod-configmaps-eb90c745-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007886167s
Aug 23 09:49:25.710: INFO: Pod "pod-configmaps-eb90c745-e525-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011428744s
STEP: Saw pod success
Aug 23 09:49:25.710: INFO: Pod "pod-configmaps-eb90c745-e525-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:49:25.712: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-eb90c745-e525-11ea-87d5-0242ac11000a container env-test: 
STEP: delete the pod
Aug 23 09:49:25.774: INFO: Waiting for pod pod-configmaps-eb90c745-e525-11ea-87d5-0242ac11000a to disappear
Aug 23 09:49:25.786: INFO: Pod pod-configmaps-eb90c745-e525-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:49:25.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hdbsq" for this suite.
Aug 23 09:49:31.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:49:31.807: INFO: namespace: e2e-tests-secrets-hdbsq, resource: bindings, ignored listing per whitelist
Aug 23 09:49:31.885: INFO: namespace e2e-tests-secrets-hdbsq deletion completed in 6.092097021s
• [SLOW TEST:10.307 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:49:31.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Aug 23 09:49:32.033: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-q5j7w" to be "success or failure"
Aug 23 09:49:32.053: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 19.346678ms
Aug 23 09:49:34.072: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039055337s
Aug 23 09:49:36.372: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338843962s
Aug 23 09:49:38.670: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.636627678s
Aug 23 09:49:40.675: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.641434554s
STEP: Saw pod success
Aug 23 09:49:40.675: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 23 09:49:40.677: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 23 09:49:40.715: INFO: Waiting for pod pod-host-path-test to disappear
Aug 23 09:49:40.727: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:49:40.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-q5j7w" for this suite.
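The hostPath test above verifies that the mounted volume carries the expected permission bits. The same kind of check, reduced to plain filesystem calls on a local directory (the directory here is only a stand-in for the hostPath mount):

```python
import os
import stat
import tempfile

# Create a directory, set its permission bits explicitly, and read them
# back the way a mode-checking test would. chmod sets the bits exactly,
# so the process umask does not affect the result.
volume_dir = tempfile.mkdtemp()
os.chmod(volume_dir, 0o777)
mode_bits = stat.S_IMODE(os.stat(volume_dir).st_mode)
print(oct(mode_bits))   # -> 0o777
os.rmdir(volume_dir)
```

In the real test the check runs inside a container against the volume's mount point; the 0o777 value here is illustrative, not the mode the e2e test asserts.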
Aug 23 09:49:46.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:49:46.779: INFO: namespace: e2e-tests-hostpath-q5j7w, resource: bindings, ignored listing per whitelist
Aug 23 09:49:46.824: INFO: namespace e2e-tests-hostpath-q5j7w deletion completed in 6.093892735s
• [SLOW TEST:14.939 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:49:46.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-faa62aea-e525-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume secrets
Aug 23 09:49:47.016: INFO: Waiting up to 5m0s for pod "pod-secrets-faa7fed3-e525-11ea-87d5-0242ac11000a" in namespace "e2e-tests-secrets-srqr7" to be "success or failure"
Aug 23 09:49:47.035: INFO: Pod "pod-secrets-faa7fed3-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.584393ms
Aug 23 09:49:49.157: INFO: Pod "pod-secrets-faa7fed3-e525-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140774978s
Aug 23 09:49:51.213: INFO: Pod "pod-secrets-faa7fed3-e525-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.196671325s
STEP: Saw pod success
Aug 23 09:49:51.213: INFO: Pod "pod-secrets-faa7fed3-e525-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:49:51.215: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-faa7fed3-e525-11ea-87d5-0242ac11000a container secret-volume-test: 
STEP: delete the pod
Aug 23 09:49:51.231: INFO: Waiting for pod pod-secrets-faa7fed3-e525-11ea-87d5-0242ac11000a to disappear
Aug 23 09:49:51.236: INFO: Pod pod-secrets-faa7fed3-e525-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:49:51.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-srqr7" for this suite.
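Secret values such as the ones the pod above consumes are carried base64-encoded in the Secret object's `data` field, while the file projected into the volume contains the decoded bytes. A quick round-trip showing the two representations (the value is made up for illustration):

```python
import base64

plaintext = b"value-1"                 # what the pod reads from the mounted file
encoded = base64.b64encode(plaintext)  # what the Secret's data field carries
decoded = base64.b64decode(encoded)
print(encoded.decode("ascii"))         # -> dmFsdWUtMQ==
assert decoded == plaintext
```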
Aug 23 09:49:57.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:49:57.360: INFO: namespace: e2e-tests-secrets-srqr7, resource: bindings, ignored listing per whitelist
Aug 23 09:49:57.364: INFO: namespace e2e-tests-secrets-srqr7 deletion completed in 6.12474479s
• [SLOW TEST:10.540 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:49:57.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 23 09:49:57.487: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00e5fcf4-e526-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-dd9vz" to be "success or failure"
Aug 23 09:49:57.507: INFO: Pod "downwardapi-volume-00e5fcf4-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.092872ms
Aug 23 09:49:59.510: INFO: Pod "downwardapi-volume-00e5fcf4-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023056951s
Aug 23 09:50:01.514: INFO: Pod "downwardapi-volume-00e5fcf4-e526-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027535478s
STEP: Saw pod success
Aug 23 09:50:01.515: INFO: Pod "downwardapi-volume-00e5fcf4-e526-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:50:01.520: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-00e5fcf4-e526-11ea-87d5-0242ac11000a container client-container: 
STEP: delete the pod
Aug 23 09:50:01.589: INFO: Waiting for pod downwardapi-volume-00e5fcf4-e526-11ea-87d5-0242ac11000a to disappear
Aug 23 09:50:01.602: INFO: Pod downwardapi-volume-00e5fcf4-e526-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:50:01.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dd9vz" for this suite.
Aug 23 09:50:07.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:50:07.777: INFO: namespace: e2e-tests-projected-dd9vz, resource: bindings, ignored listing per whitelist
Aug 23 09:50:07.822: INFO: namespace e2e-tests-projected-dd9vz deletion completed in 6.217810708s
• [SLOW TEST:10.458 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:50:07.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 23 09:50:08.183: INFO: Waiting up to 5m0s for pod "downwardapi-volume-073ad806-e526-11ea-87d5-0242ac11000a" in namespace "e2e-tests-downward-api-prnz7" to be "success or failure"
Aug 23 09:50:08.195: INFO: Pod "downwardapi-volume-073ad806-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.44111ms
Aug 23 09:50:10.264: INFO: Pod "downwardapi-volume-073ad806-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081381097s
Aug 23 09:50:12.360: INFO: Pod "downwardapi-volume-073ad806-e526-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.177147885s
STEP: Saw pod success
Aug 23 09:50:12.360: INFO: Pod "downwardapi-volume-073ad806-e526-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:50:12.363: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-073ad806-e526-11ea-87d5-0242ac11000a container client-container: 
STEP: delete the pod
Aug 23 09:50:12.452: INFO: Waiting for pod downwardapi-volume-073ad806-e526-11ea-87d5-0242ac11000a to disappear
Aug 23 09:50:12.504: INFO: Pod downwardapi-volume-073ad806-e526-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:50:12.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-prnz7" for this suite.
Aug 23 09:50:24.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:50:24.624: INFO: namespace: e2e-tests-downward-api-prnz7, resource: bindings, ignored listing per whitelist Aug 23 09:50:24.627: INFO: namespace e2e-tests-downward-api-prnz7 deletion completed in 12.119585873s • [SLOW TEST:16.804 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:50:24.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 23 09:50:26.164: INFO: Waiting up to 5m0s for pod "pod-11cad996-e526-11ea-87d5-0242ac11000a" in namespace "e2e-tests-emptydir-f4d76" to be "success or failure" Aug 23 09:50:26.361: INFO: Pod "pod-11cad996-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 196.982313ms Aug 23 09:50:28.364: INFO: Pod "pod-11cad996-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.200354974s Aug 23 09:50:30.576: INFO: Pod "pod-11cad996-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.41190127s Aug 23 09:50:32.666: INFO: Pod "pod-11cad996-e526-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.501969102s STEP: Saw pod success Aug 23 09:50:32.666: INFO: Pod "pod-11cad996-e526-11ea-87d5-0242ac11000a" satisfied condition "success or failure" Aug 23 09:50:32.668: INFO: Trying to get logs from node hunter-worker pod pod-11cad996-e526-11ea-87d5-0242ac11000a container test-container: STEP: delete the pod Aug 23 09:50:32.847: INFO: Waiting for pod pod-11cad996-e526-11ea-87d5-0242ac11000a to disappear Aug 23 09:50:32.890: INFO: Pod pod-11cad996-e526-11ea-87d5-0242ac11000a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:50:32.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-f4d76" for this suite. 
Aug 23 09:50:38.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:50:38.948: INFO: namespace: e2e-tests-emptydir-f4d76, resource: bindings, ignored listing per whitelist Aug 23 09:50:38.978: INFO: namespace e2e-tests-emptydir-f4d76 deletion completed in 6.084304562s • [SLOW TEST:14.351 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:50:38.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0823 09:50:40.182764 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Aug 23 09:50:40.182: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:50:40.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-6wmll" for this suite.
Aug 23 09:50:48.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:50:48.254: INFO: namespace: e2e-tests-gc-6wmll, resource: bindings, ignored listing per whitelist
Aug 23 09:50:48.264: INFO: namespace e2e-tests-gc-6wmll deletion completed in 8.077797332s

• [SLOW TEST:9.286 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:50:48.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Aug 23 09:50:48.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:50:55.192: INFO: stderr: ""
Aug 23 09:50:55.192: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 23 09:50:55.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:50:55.305: INFO: stderr: ""
Aug 23 09:50:55.305: INFO: stdout: "update-demo-nautilus-8qk77 update-demo-nautilus-jpmbd "
Aug 23 09:50:55.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qk77 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:50:55.413: INFO: stderr: ""
Aug 23 09:50:55.413: INFO: stdout: ""
Aug 23 09:50:55.413: INFO: update-demo-nautilus-8qk77 is created but not running
Aug 23 09:51:00.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:00.518: INFO: stderr: ""
Aug 23 09:51:00.518: INFO: stdout: "update-demo-nautilus-8qk77 update-demo-nautilus-jpmbd "
Aug 23 09:51:00.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qk77 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:00.613: INFO: stderr: ""
Aug 23 09:51:00.613: INFO: stdout: "true"
Aug 23 09:51:00.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qk77 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:00.970: INFO: stderr: ""
Aug 23 09:51:00.970: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 23 09:51:00.970: INFO: validating pod update-demo-nautilus-8qk77
Aug 23 09:51:01.032: INFO: got data: { "image": "nautilus.jpg" }
Aug 23 09:51:01.033: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 23 09:51:01.033: INFO: update-demo-nautilus-8qk77 is verified up and running
Aug 23 09:51:01.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jpmbd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:01.136: INFO: stderr: ""
Aug 23 09:51:01.136: INFO: stdout: "true"
Aug 23 09:51:01.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jpmbd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:01.229: INFO: stderr: ""
Aug 23 09:51:01.229: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 23 09:51:01.229: INFO: validating pod update-demo-nautilus-jpmbd
Aug 23 09:51:01.232: INFO: got data: { "image": "nautilus.jpg" }
Aug 23 09:51:01.232: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 23 09:51:01.232: INFO: update-demo-nautilus-jpmbd is verified up and running
STEP: scaling down the replication controller
Aug 23 09:51:01.233: INFO: scanned /root for discovery docs:
Aug 23 09:51:01.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:02.544: INFO: stderr: ""
Aug 23 09:51:02.544: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 23 09:51:02.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:02.681: INFO: stderr: ""
Aug 23 09:51:02.681: INFO: stdout: "update-demo-nautilus-8qk77 update-demo-nautilus-jpmbd "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 23 09:51:07.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:07.780: INFO: stderr: ""
Aug 23 09:51:07.780: INFO: stdout: "update-demo-nautilus-jpmbd "
Aug 23 09:51:07.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jpmbd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:07.885: INFO: stderr: ""
Aug 23 09:51:07.885: INFO: stdout: "true"
Aug 23 09:51:07.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jpmbd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:07.985: INFO: stderr: ""
Aug 23 09:51:07.985: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 23 09:51:07.985: INFO: validating pod update-demo-nautilus-jpmbd
Aug 23 09:51:07.988: INFO: got data: { "image": "nautilus.jpg" }
Aug 23 09:51:07.988: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 23 09:51:07.988: INFO: update-demo-nautilus-jpmbd is verified up and running
STEP: scaling up the replication controller
Aug 23 09:51:07.989: INFO: scanned /root for discovery docs:
Aug 23 09:51:07.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:09.163: INFO: stderr: ""
Aug 23 09:51:09.163: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 23 09:51:09.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:09.279: INFO: stderr: ""
Aug 23 09:51:09.279: INFO: stdout: "update-demo-nautilus-jpmbd update-demo-nautilus-rbwh6 "
Aug 23 09:51:09.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jpmbd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:09.371: INFO: stderr: ""
Aug 23 09:51:09.371: INFO: stdout: "true"
Aug 23 09:51:09.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jpmbd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:09.464: INFO: stderr: ""
Aug 23 09:51:09.464: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 23 09:51:09.464: INFO: validating pod update-demo-nautilus-jpmbd
Aug 23 09:51:09.467: INFO: got data: { "image": "nautilus.jpg" }
Aug 23 09:51:09.467: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 23 09:51:09.467: INFO: update-demo-nautilus-jpmbd is verified up and running
Aug 23 09:51:09.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbwh6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:09.564: INFO: stderr: ""
Aug 23 09:51:09.564: INFO: stdout: ""
Aug 23 09:51:09.564: INFO: update-demo-nautilus-rbwh6 is created but not running
Aug 23 09:51:14.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:14.666: INFO: stderr: ""
Aug 23 09:51:14.666: INFO: stdout: "update-demo-nautilus-jpmbd update-demo-nautilus-rbwh6 "
Aug 23 09:51:14.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jpmbd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:14.767: INFO: stderr: ""
Aug 23 09:51:14.767: INFO: stdout: "true"
Aug 23 09:51:14.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jpmbd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:14.856: INFO: stderr: ""
Aug 23 09:51:14.856: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 23 09:51:14.856: INFO: validating pod update-demo-nautilus-jpmbd
Aug 23 09:51:14.858: INFO: got data: { "image": "nautilus.jpg" }
Aug 23 09:51:14.858: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 23 09:51:14.858: INFO: update-demo-nautilus-jpmbd is verified up and running
Aug 23 09:51:14.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbwh6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:14.964: INFO: stderr: ""
Aug 23 09:51:14.964: INFO: stdout: "true"
Aug 23 09:51:14.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rbwh6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:15.063: INFO: stderr: ""
Aug 23 09:51:15.063: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 23 09:51:15.063: INFO: validating pod update-demo-nautilus-rbwh6
Aug 23 09:51:15.067: INFO: got data: { "image": "nautilus.jpg" }
Aug 23 09:51:15.067: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 23 09:51:15.067: INFO: update-demo-nautilus-rbwh6 is verified up and running
STEP: using delete to clean up resources
Aug 23 09:51:15.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:15.282: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 23 09:51:15.282: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 23 09:51:15.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-wqqqq'
Aug 23 09:51:15.423: INFO: stderr: "No resources found.\n"
Aug 23 09:51:15.423: INFO: stdout: ""
Aug 23 09:51:15.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-wqqqq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 23 09:51:15.524: INFO: stderr: ""
Aug 23 09:51:15.524: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:51:15.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wqqqq" for this suite.
Aug 23 09:51:39.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:51:39.605: INFO: namespace: e2e-tests-kubectl-wqqqq, resource: bindings, ignored listing per whitelist
Aug 23 09:51:39.617: INFO: namespace e2e-tests-kubectl-wqqqq deletion completed in 24.08909468s

• [SLOW TEST:51.353 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:51:39.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:51:45.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-2lnkl" for this suite.
Aug 23 09:52:31.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:52:31.931: INFO: namespace: e2e-tests-kubelet-test-2lnkl, resource: bindings, ignored listing per whitelist
Aug 23 09:52:31.971: INFO: namespace e2e-tests-kubelet-test-2lnkl deletion completed in 46.223530841s

• [SLOW TEST:52.354 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:52:31.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 23 09:52:32.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9kzdh'
Aug 23 09:52:32.676: INFO: stderr: ""
Aug 23 09:52:32.676: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Aug 23 09:52:42.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9kzdh -o json'
Aug 23 09:52:42.812: INFO: stderr: ""
Aug 23 09:52:42.812: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-23T09:52:32Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-9kzdh\",\n \"resourceVersion\": \"1682805\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-9kzdh/pods/e2e-test-nginx-pod\",\n \"uid\": \"5d64aae1-e526-11ea-a485-0242ac120004\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-j9vj4\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-j9vj4\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-j9vj4\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-23T09:52:32Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-23T09:52:38Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-23T09:52:38Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-23T09:52:32Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://e1e5611e27b8948da8b95690a5aff446d373fc25c0d37cb8c4e6a7d5d4a0e69d\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-23T09:52:37Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.80\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-23T09:52:32Z\"\n }\n}\n"
STEP: replace the image in the pod
Aug 23 09:52:42.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-9kzdh'
Aug 23 09:52:43.763: INFO: stderr: ""
Aug 23 09:52:43.763: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Aug 23 09:52:43.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9kzdh'
Aug 23 09:52:47.196: INFO: stderr: ""
Aug 23 09:52:47.196: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:52:47.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9kzdh" for this suite.
Aug 23 09:52:53.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:52:53.237: INFO: namespace: e2e-tests-kubectl-9kzdh, resource: bindings, ignored listing per whitelist
Aug 23 09:52:53.270: INFO: namespace e2e-tests-kubectl-9kzdh deletion completed in 6.072312602s

• [SLOW TEST:21.299 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:52:53.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Aug 23 09:53:01.438: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-69ba542c-e526-11ea-87d5-0242ac11000a", GenerateName:"", Namespace:"e2e-tests-pods-sjgzb", SelfLink:"/api/v1/namespaces/e2e-tests-pods-sjgzb/pods/pod-submit-remove-69ba542c-e526-11ea-87d5-0242ac11000a", UID:"69be43e4-e526-11ea-a485-0242ac120004", ResourceVersion:"1682879", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733773173, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"355837637"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-pn68p", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001ce40c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil),
CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pn68p", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001442c58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc00194d860), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001442ca0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001442cc0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001442cc8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001442ccc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773173, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773179, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773179, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773173, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.2", PodIP:"10.244.1.78", StartTime:(*v1.Time)(0xc000971b60), 
InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000971b80), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://037df86afc2cdba031d1bec56aa787d570941f86e78f8e2e2170c9c5f983b22b"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 23 09:53:06.460: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:53:06.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-sjgzb" for this suite.
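Editor's note: the "setting up watch" / "verifying pod creation was observed" / "verifying pod deletion was observed" steps above consume a watch event stream and check ordering. A sketch of that check in Python, using plain dicts in place of real watch events (the event shape and `observe_lifecycle` helper are illustrative, not the client library's types):

```python
def observe_lifecycle(events, name):
    """Return True once the stream shows the named pod ADDED and later
    DELETED, in that order; unrelated events are ignored. This mirrors
    the creation/deletion verification the Pods e2e test performs."""
    seen_added = False
    for ev in events:
        if ev.get("object") != name:
            continue
        if ev["type"] == "ADDED":
            seen_added = True
        elif ev["type"] == "DELETED" and seen_added:
            return True
    return False

stream = [
    {"type": "ADDED", "object": "other-pod"},
    {"type": "ADDED", "object": "pod-submit-remove"},
    {"type": "MODIFIED", "object": "pod-submit-remove"},
    {"type": "DELETED", "object": "pod-submit-remove"},
]
print(observe_lifecycle(stream, "pod-submit-remove"))  # True
```

The real test additionally treats "no pod exists with the name we were looking for" (as logged above) as evidence the kubelet observed the graceful termination.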
Aug 23 09:53:12.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:53:12.635: INFO: namespace: e2e-tests-pods-sjgzb, resource: bindings, ignored listing per whitelist
Aug 23 09:53:12.650: INFO: namespace e2e-tests-pods-sjgzb deletion completed in 6.18558968s
• [SLOW TEST:19.380 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Deployment
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:53:12.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 09:53:12.784: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 23 09:53:17.788: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 23 09:53:17.788: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 23 09:53:19.792: INFO: Creating deployment "test-rollover-deployment"
Aug 23 09:53:19.810: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 23 09:53:22.279: INFO: Check
revision of new replica set for deployment "test-rollover-deployment" Aug 23 09:53:22.800: INFO: Ensure that both replica sets have 1 created replica Aug 23 09:53:22.865: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 23 09:53:22.872: INFO: Updating deployment test-rollover-deployment Aug 23 09:53:22.872: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 23 09:53:25.019: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 23 09:53:25.024: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 23 09:53:25.030: INFO: all replica sets need to contain the pod-template-hash label Aug 23 09:53:25.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773203, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 23 09:53:27.039: INFO: all replica sets need to contain the pod-template-hash label Aug 23 09:53:27.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773203, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 23 09:53:29.065: INFO: all replica sets need to contain the pod-template-hash label Aug 23 09:53:29.065: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 23 09:53:31.077: INFO: all replica sets need to contain the pod-template-hash label Aug 23 09:53:31.078: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 23 09:53:33.037: INFO: all replica sets need to contain the pod-template-hash label Aug 23 09:53:33.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 23 09:53:35.287: INFO: all 
replica sets need to contain the pod-template-hash label Aug 23 09:53:35.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 23 09:53:37.075: INFO: all replica sets need to contain the pod-template-hash label Aug 23 09:53:37.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773199, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 23 09:53:39.322: INFO: Aug 23 09:53:39.322: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Aug 23 09:53:39.663: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-ffhjx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ffhjx/deployments/test-rollover-deployment,UID:797c767e-e526-11ea-a485-0242ac120004,ResourceVersion:1683042,Generation:2,CreationTimestamp:2020-08-23 09:53:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-23 09:53:19 +0000 UTC 2020-08-23 09:53:19 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-23 09:53:38 +0000 UTC 2020-08-23 09:53:19 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Aug 23 09:53:39.669: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-ffhjx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ffhjx/replicasets/test-rollover-deployment-5b8479fdb6,UID:7b528475-e526-11ea-a485-0242ac120004,ResourceVersion:1683032,Generation:2,CreationTimestamp:2020-08-23 09:53:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 797c767e-e526-11ea-a485-0242ac120004 0xc001b82707 0xc001b82708}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 23 09:53:39.669: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 23 09:53:39.669: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-ffhjx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ffhjx/replicasets/test-rollover-controller,UID:754b2b02-e526-11ea-a485-0242ac120004,ResourceVersion:1683041,Generation:2,CreationTimestamp:2020-08-23 09:53:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 797c767e-e526-11ea-a485-0242ac120004 0xc001b82517 0xc001b82518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 23 09:53:39.669: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-ffhjx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ffhjx/replicasets/test-rollover-deployment-58494b7559,UID:7980531f-e526-11ea-a485-0242ac120004,ResourceVersion:1682996,Generation:2,CreationTimestamp:2020-08-23 09:53:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 797c767e-e526-11ea-a485-0242ac120004 0xc001b825d7 0xc001b825d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 23 09:53:39.672: INFO: Pod "test-rollover-deployment-5b8479fdb6-5crht" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-5crht,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-ffhjx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ffhjx/pods/test-rollover-deployment-5b8479fdb6-5crht,UID:7b803e92-e526-11ea-a485-0242ac120004,ResourceVersion:1683011,Generation:0,CreationTimestamp:2020-08-23 09:53:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 7b528475-e526-11ea-a485-0242ac120004 0xc001b836e7 0xc001b836e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wr5vh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wr5vh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-wr5vh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b83760} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b83780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:53:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:53:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:53:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:53:23 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.80,StartTime:2020-08-23 09:53:23 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-23 09:53:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://519017410f28f2ea2c817a0ff381cf74c0ce5f55d95b1ed5ff3071af51ed5a71}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:53:39.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-ffhjx" for this suite.
Aug 23 09:53:47.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:53:47.872: INFO: namespace: e2e-tests-deployment-ffhjx, resource: bindings, ignored listing per whitelist
Aug 23 09:53:47.909: INFO: namespace e2e-tests-deployment-ffhjx deletion completed in 8.234680287s
• [SLOW TEST:35.258 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] DNS
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:53:47.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK >
/results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-r245b.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-r245b.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-r245b.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-r245b.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-r245b.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-r245b.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 23 09:53:58.408: INFO: DNS probes using e2e-tests-dns-r245b/dns-test-8a6d0177-e526-11ea-87d5-0242ac11000a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Aug 23 09:53:58.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-r245b" for this suite. Aug 23 09:54:04.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 23 09:54:04.497: INFO: namespace: e2e-tests-dns-r245b, resource: bindings, ignored listing per whitelist Aug 23 09:54:04.542: INFO: namespace e2e-tests-dns-r245b deletion completed in 6.087033905s • [SLOW TEST:16.633 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Aug 23 09:54:04.542: INFO: >>> kubeConfig: 
/root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 09:54:04.688: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 23 09:54:04.697: INFO: Number of nodes with available pods: 0
Aug 23 09:54:04.697: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 23 09:54:04.786: INFO: Number of nodes with available pods: 0
Aug 23 09:54:04.786: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:05.908: INFO: Number of nodes with available pods: 0
Aug 23 09:54:05.908: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:06.790: INFO: Number of nodes with available pods: 0
Aug 23 09:54:06.790: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:07.790: INFO: Number of nodes with available pods: 0
Aug 23 09:54:07.790: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:08.791: INFO: Number of nodes with available pods: 1
Aug 23 09:54:08.791: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 23 09:54:08.854: INFO: Number of nodes with available pods: 1
Aug 23 09:54:08.854: INFO: Number of running nodes: 0, number of available pods: 1
Aug 23 09:54:09.895: INFO: Number of nodes with available pods: 0
Aug 23 09:54:09.896: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 23 09:54:10.370: INFO: Number of nodes with available pods: 0
Aug 23 09:54:10.370: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:11.375: INFO: Number of nodes with available pods: 0
Aug 23 09:54:11.375: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:12.374: INFO: Number of nodes with available pods: 0
Aug 23 09:54:12.374: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:13.374: INFO: Number of nodes with available pods: 0
Aug 23 09:54:13.374: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:14.392: INFO: Number of nodes with available pods: 0
Aug 23 09:54:14.392: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:15.374: INFO: Number of nodes with available pods: 0
Aug 23 09:54:15.374: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:16.374: INFO: Number of nodes with available pods: 0
Aug 23 09:54:16.374: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:17.374: INFO: Number of nodes with available pods: 0
Aug 23 09:54:17.375: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:18.470: INFO: Number of nodes with available pods: 0
Aug 23 09:54:18.470: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:19.620: INFO: Number of nodes with available pods: 0
Aug 23 09:54:19.620: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:20.375: INFO: Number of nodes with available pods: 0
Aug 23 09:54:20.375: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:21.537: INFO: Number of nodes with available pods: 0
Aug 23 09:54:21.537: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:22.374: INFO: Number of nodes with available pods: 0
Aug 23 09:54:22.374: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 09:54:23.375: INFO: Number of nodes with available pods: 1
Aug 23 09:54:23.375: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6jjqg, will wait for the garbage collector to delete the pods
Aug 23 09:54:23.441: INFO: Deleting DaemonSet.extensions daemon-set took: 6.538664ms
Aug 23 09:54:23.541: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.235767ms
Aug 23 09:54:38.404: INFO: Number of nodes with available pods: 0
Aug 23 09:54:38.404: INFO: Number of running nodes: 0, number of available pods: 0
Aug 23 09:54:38.410: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6jjqg/daemonsets","resourceVersion":"1683282"},"items":null}
Aug 23 09:54:38.412: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6jjqg/pods","resourceVersion":"1683282"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:54:38.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6jjqg" for this suite.
Aug 23 09:54:44.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:54:44.684: INFO: namespace: e2e-tests-daemonsets-6jjqg, resource: bindings, ignored listing per whitelist
Aug 23 09:54:44.684: INFO: namespace e2e-tests-daemonsets-6jjqg deletion completed in 6.180542955s
• [SLOW TEST:40.142 seconds]
[sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Services should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:54:44.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:54:44.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-g7ssk" for this suite.
Aug 23 09:54:50.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:54:50.846: INFO: namespace: e2e-tests-services-g7ssk, resource: bindings, ignored listing per whitelist
Aug 23 09:54:50.895: INFO: namespace e2e-tests-services-g7ssk deletion completed in 6.072363926s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:6.211 seconds]
[sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:54:50.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0823 09:55:01.023308 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 23 09:55:01.023: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:55:01.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-rwww6" for this suite.
Aug 23 09:55:07.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:55:07.063: INFO: namespace: e2e-tests-gc-rwww6, resource: bindings, ignored listing per whitelist
Aug 23 09:55:07.114: INFO: namespace e2e-tests-gc-rwww6 deletion completed in 6.088561273s
• [SLOW TEST:16.218 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:55:07.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
Aug 23 09:55:19.984: INFO: 5 pods remaining
Aug 23 09:55:19.984: INFO: 5 pods has nil DeletionTimestamp
Aug 23 09:55:19.984: INFO: 
STEP: Gathering metrics
W0823 09:55:25.309448 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 23 09:55:25.309: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:55:25.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-d6klt" for this suite.
Aug 23 09:55:34.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:55:34.543: INFO: namespace: e2e-tests-gc-d6klt, resource: bindings, ignored listing per whitelist
Aug 23 09:55:34.567: INFO: namespace e2e-tests-gc-d6klt deletion completed in 9.090662071s
• [SLOW TEST:27.453 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:55:34.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0823 09:56:05.235088 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 23 09:56:05.235: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:56:05.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-g2km6" for this suite.
Aug 23 09:56:13.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:56:13.271: INFO: namespace: e2e-tests-gc-g2km6, resource: bindings, ignored listing per whitelist
Aug 23 09:56:13.336: INFO: namespace e2e-tests-gc-g2km6 deletion completed in 8.097884406s
• [SLOW TEST:38.768 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:56:13.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-e116d8c8-e526-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 09:56:13.797: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e11ac031-e526-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-p5fvl" to be "success or failure"
Aug 23 09:56:13.859: INFO: Pod "pod-projected-configmaps-e11ac031-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 62.10881ms
Aug 23 09:56:16.019: INFO: Pod "pod-projected-configmaps-e11ac031-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222378381s
Aug 23 09:56:18.022: INFO: Pod "pod-projected-configmaps-e11ac031-e526-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.225315987s
STEP: Saw pod success
Aug 23 09:56:18.022: INFO: Pod "pod-projected-configmaps-e11ac031-e526-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:56:18.024: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-e11ac031-e526-11ea-87d5-0242ac11000a container projected-configmap-volume-test: 
STEP: delete the pod
Aug 23 09:56:18.060: INFO: Waiting for pod pod-projected-configmaps-e11ac031-e526-11ea-87d5-0242ac11000a to disappear
Aug 23 09:56:18.069: INFO: Pod pod-projected-configmaps-e11ac031-e526-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:56:18.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p5fvl" for this suite.
Aug 23 09:56:24.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:56:24.379: INFO: namespace: e2e-tests-projected-p5fvl, resource: bindings, ignored listing per whitelist
Aug 23 09:56:24.598: INFO: namespace e2e-tests-projected-p5fvl deletion completed in 6.525935543s
• [SLOW TEST:11.262 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:56:24.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-e7c1c8f4-e526-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume secrets
Aug 23 09:56:24.982: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e7d656fc-e526-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-bddlv" to be "success or failure"
Aug 23 09:56:25.130: INFO: Pod "pod-projected-secrets-e7d656fc-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 148.329037ms
Aug 23 09:56:27.134: INFO: Pod "pod-projected-secrets-e7d656fc-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151829917s
Aug 23 09:56:29.138: INFO: Pod "pod-projected-secrets-e7d656fc-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156023877s
Aug 23 09:56:31.412: INFO: Pod "pod-projected-secrets-e7d656fc-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429704219s
Aug 23 09:56:33.416: INFO: Pod "pod-projected-secrets-e7d656fc-e526-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.43398667s
STEP: Saw pod success
Aug 23 09:56:33.416: INFO: Pod "pod-projected-secrets-e7d656fc-e526-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:56:33.419: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-e7d656fc-e526-11ea-87d5-0242ac11000a container projected-secret-volume-test: 
STEP: delete the pod
Aug 23 09:56:33.857: INFO: Waiting for pod pod-projected-secrets-e7d656fc-e526-11ea-87d5-0242ac11000a to disappear
Aug 23 09:56:34.052: INFO: Pod pod-projected-secrets-e7d656fc-e526-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:56:34.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bddlv" for this suite.
Aug 23 09:56:40.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:56:40.121: INFO: namespace: e2e-tests-projected-bddlv, resource: bindings, ignored listing per whitelist
Aug 23 09:56:40.146: INFO: namespace e2e-tests-projected-bddlv deletion completed in 6.091549943s
• [SLOW TEST:15.548 seconds]
[sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:56:40.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 23 09:56:40.379: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-pdspp,SelfLink:/api/v1/namespaces/e2e-tests-watch-pdspp/configmaps/e2e-watch-test-resource-version,UID:f0fbd0cf-e526-11ea-a485-0242ac120004,ResourceVersion:1683871,Generation:0,CreationTimestamp:2020-08-23 09:56:40 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 23 09:56:40.380: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-pdspp,SelfLink:/api/v1/namespaces/e2e-tests-watch-pdspp/configmaps/e2e-watch-test-resource-version,UID:f0fbd0cf-e526-11ea-a485-0242ac120004,ResourceVersion:1683872,Generation:0,CreationTimestamp:2020-08-23 09:56:40 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:56:40.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-pdspp" for this suite.
Aug 23 09:56:46.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:56:46.415: INFO: namespace: e2e-tests-watch-pdspp, resource: bindings, ignored listing per whitelist
Aug 23 09:56:46.466: INFO: namespace e2e-tests-watch-pdspp deletion completed in 6.072794155s
• [SLOW TEST:6.320 seconds]
[sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:56:46.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 09:56:46.579: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
alternatives.log
containers/
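The listing above comes from the node's `proxy/logs` subresource. A sketch of the path the test hits follows; the node name is taken from the log, and actually fetching the listing needs a live cluster, so the block only assembles the path.

```shell
# The proxy subresource path exercised above (node name from the log).
NODE=hunter-worker
LOGS_PATH="/api/v1/nodes/${NODE}/proxy/logs/"
# With a live cluster this returns the node's log directory listing:
#   kubectl get --raw "$LOGS_PATH"
echo "$LOGS_PATH"
```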

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-f89a5616-e526-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 09:56:53.079: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f89c0bcb-e526-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-dx9ws" to be "success or failure"
Aug 23 09:56:53.112: INFO: Pod "pod-projected-configmaps-f89c0bcb-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 33.433665ms
Aug 23 09:56:55.244: INFO: Pod "pod-projected-configmaps-f89c0bcb-e526-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164861902s
Aug 23 09:56:57.247: INFO: Pod "pod-projected-configmaps-f89c0bcb-e526-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.168010172s
STEP: Saw pod success
Aug 23 09:56:57.247: INFO: Pod "pod-projected-configmaps-f89c0bcb-e526-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:56:57.249: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-f89c0bcb-e526-11ea-87d5-0242ac11000a container projected-configmap-volume-test: 
STEP: delete the pod
Aug 23 09:56:57.634: INFO: Waiting for pod pod-projected-configmaps-f89c0bcb-e526-11ea-87d5-0242ac11000a to disappear
Aug 23 09:56:57.664: INFO: Pod pod-projected-configmaps-f89c0bcb-e526-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:56:57.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dx9ws" for this suite.
Aug 23 09:57:04.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:57:04.273: INFO: namespace: e2e-tests-projected-dx9ws, resource: bindings, ignored listing per whitelist
Aug 23 09:57:04.292: INFO: namespace e2e-tests-projected-dx9ws deletion completed in 6.353358321s

• [SLOW TEST:11.378 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:57:04.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Aug 23 09:57:04.410: INFO: namespace e2e-tests-kubectl-zkt8q
Aug 23 09:57:04.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zkt8q'
Aug 23 09:57:04.943: INFO: stderr: ""
Aug 23 09:57:04.943: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 23 09:57:05.946: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 09:57:05.946: INFO: Found 0 / 1
Aug 23 09:57:07.064: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 09:57:07.064: INFO: Found 0 / 1
Aug 23 09:57:07.945: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 09:57:07.945: INFO: Found 0 / 1
Aug 23 09:57:08.998: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 09:57:08.998: INFO: Found 0 / 1
Aug 23 09:57:09.946: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 09:57:09.946: INFO: Found 1 / 1
Aug 23 09:57:09.946: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 23 09:57:09.949: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 09:57:09.949: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 23 09:57:09.949: INFO: wait on redis-master startup in e2e-tests-kubectl-zkt8q 
Aug 23 09:57:09.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-cpl5c redis-master --namespace=e2e-tests-kubectl-zkt8q'
Aug 23 09:57:10.047: INFO: stderr: ""
Aug 23 09:57:10.047: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 23 Aug 09:57:08.082 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Aug 09:57:08.082 # Server started, Redis version 3.2.12\n1:M 23 Aug 09:57:08.082 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Aug 09:57:08.082 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Aug 23 09:57:10.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-zkt8q'
Aug 23 09:57:10.502: INFO: stderr: ""
Aug 23 09:57:10.502: INFO: stdout: "service/rm2 exposed\n"
Aug 23 09:57:10.581: INFO: Service rm2 in namespace e2e-tests-kubectl-zkt8q found.
STEP: exposing service
Aug 23 09:57:12.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-zkt8q'
Aug 23 09:57:12.719: INFO: stderr: ""
Aug 23 09:57:12.719: INFO: stdout: "service/rm3 exposed\n"
Aug 23 09:57:12.733: INFO: Service rm3 in namespace e2e-tests-kubectl-zkt8q found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:57:14.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zkt8q" for this suite.
Aug 23 09:57:38.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:57:38.819: INFO: namespace: e2e-tests-kubectl-zkt8q, resource: bindings, ignored listing per whitelist
Aug 23 09:57:38.857: INFO: namespace e2e-tests-kubectl-zkt8q deletion completed in 24.116151521s

• [SLOW TEST:34.565 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
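The `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` step above generates a Service for the replication controller. A hedged sketch of the equivalent Service manifest follows; the `app: redis` selector is assumed from the `map[app:redis]` selector seen in the log, and the grep at the end just sanity-checks the generated file locally.

```shell
# Hypothetical Service manifest equivalent to the `kubectl expose` call above.
# Selector is assumed to match the RC's app=redis label from the log.
cat > /tmp/rm2-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis
  ports:
  - port: 1234        # port the Service listens on
    targetPort: 6379  # port the Redis container serves on
EOF
grep -c 'targetPort: 6379' /tmp/rm2-service.yaml
```

Exposing `rm2` again as `rm3` (as the test does) simply layers a second Service, with its own port, over the same selector.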
SSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:57:38.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 09:57:38.942: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 23 09:57:38.981: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 23 09:57:43.987: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 23 09:57:43.987: INFO: Creating deployment "test-rolling-update-deployment"
Aug 23 09:57:43.991: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 23 09:57:44.029: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 23 09:57:46.098: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 23 09:57:46.101: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773464, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773464, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773464, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773464, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 23 09:57:48.104: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773464, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773464, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773464, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733773464, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 23 09:57:50.104: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Aug 23 09:57:50.111: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-jz6sj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jz6sj/deployments/test-rolling-update-deployment,UID:16f576f6-e527-11ea-a485-0242ac120004,ResourceVersion:1684133,Generation:1,CreationTimestamp:2020-08-23 09:57:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-23 09:57:44 +0000 UTC 2020-08-23 09:57:44 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-23 09:57:48 +0000 UTC 2020-08-23 09:57:44 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 23 09:57:50.115: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-jz6sj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jz6sj/replicasets/test-rolling-update-deployment-75db98fb4c,UID:16fdedf4-e527-11ea-a485-0242ac120004,ResourceVersion:1684124,Generation:1,CreationTimestamp:2020-08-23 09:57:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 16f576f6-e527-11ea-a485-0242ac120004 0xc00269ceb7 0xc00269ceb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 23 09:57:50.115: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 23 09:57:50.115: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-jz6sj,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jz6sj/replicasets/test-rolling-update-controller,UID:13f37e29-e527-11ea-a485-0242ac120004,ResourceVersion:1684132,Generation:2,CreationTimestamp:2020-08-23 09:57:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 16f576f6-e527-11ea-a485-0242ac120004 0xc00269cdf7 0xc00269cdf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 23 09:57:50.119: INFO: Pod "test-rolling-update-deployment-75db98fb4c-jw9p9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-jw9p9,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-jz6sj,SelfLink:/api/v1/namespaces/e2e-tests-deployment-jz6sj/pods/test-rolling-update-deployment-75db98fb4c-jw9p9,UID:16fec871-e527-11ea-a485-0242ac120004,ResourceVersion:1684123,Generation:0,CreationTimestamp:2020-08-23 09:57:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 16fdedf4-e527-11ea-a485-0242ac120004 0xc00257db27 0xc00257db28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9t9w4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9t9w4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-9t9w4 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00257dbb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00257dbd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:57:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:57:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:57:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 09:57:44 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.93,StartTime:2020-08-23 09:57:44 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-23 09:57:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://65e949df0ae9df296b63decc054aa0686a4102540366e37e65bc8df240bbdedf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:57:50.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-jz6sj" for this suite.
Aug 23 09:57:56.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:57:56.384: INFO: namespace: e2e-tests-deployment-jz6sj, resource: bindings, ignored listing per whitelist
Aug 23 09:57:56.414: INFO: namespace e2e-tests-deployment-jz6sj deletion completed in 6.291929017s

• [SLOW TEST:17.557 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
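The Deployment dump above shows the default RollingUpdate strategy with `MaxUnavailable` and `MaxSurge` of 25%, which is what lets the rollout replace old pods while keeping availability. A minimal manifest sketch follows, echoing the names and image from the log; the grep only sanity-checks the file locally.

```shell
# Minimal sketch of a Deployment using the rolling-update strategy the test
# asserts on; names and image echo the log, values are the cluster defaults.
cat > /tmp/rolling-update-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # at most a quarter of desired pods down at once
      maxSurge: 25%        # at most a quarter extra pods during rollout
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
grep -c 'maxSurge: 25%' /tmp/rolling-update-deployment.yaml
```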
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:57:56.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-1e7df283-e527-11ea-87d5-0242ac11000a
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:58:08.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rwpqd" for this suite.
Aug 23 09:58:33.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:58:33.123: INFO: namespace: e2e-tests-configmap-rwpqd, resource: bindings, ignored listing per whitelist
Aug 23 09:58:33.147: INFO: namespace e2e-tests-configmap-rwpqd deletion completed in 24.199498299s

• [SLOW TEST:36.732 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
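The test above mounts a ConfigMap containing both text and binary data into a volume. A sketch of such a ConfigMap follows; `binaryData` values must be base64-encoded, which the block demonstrates with illustrative bytes (the key names are hypothetical, not from the log).

```shell
# Sketch of a ConfigMap carrying text (data) and binary (binaryData) entries.
# The binary payload 0xDE 0xAD 0xBE 0xEF is illustrative; binaryData values
# must be base64-encoded in the manifest.
BIN=$(printf '\336\255\276\357' | base64)
cat > /tmp/binary-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd
data:
  data-1: value-1
binaryData:
  dump.bin: ${BIN}
EOF
echo "$BIN"
```

When mounted, `dump.bin` appears in the volume as the decoded raw bytes, while `data-1` appears as plain text.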
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:58:33.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 23 09:58:33.313: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34553611-e527-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-m8btr" to be "success or failure"
Aug 23 09:58:33.323: INFO: Pod "downwardapi-volume-34553611-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.571453ms
Aug 23 09:58:35.326: INFO: Pod "downwardapi-volume-34553611-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013207054s
Aug 23 09:58:37.330: INFO: Pod "downwardapi-volume-34553611-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016831233s
Aug 23 09:58:39.335: INFO: Pod "downwardapi-volume-34553611-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021391057s
Aug 23 09:58:41.340: INFO: Pod "downwardapi-volume-34553611-e527-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.026950673s
STEP: Saw pod success
Aug 23 09:58:41.340: INFO: Pod "downwardapi-volume-34553611-e527-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:58:41.343: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-34553611-e527-11ea-87d5-0242ac11000a container client-container: 
STEP: delete the pod
Aug 23 09:58:41.827: INFO: Waiting for pod downwardapi-volume-34553611-e527-11ea-87d5-0242ac11000a to disappear
Aug 23 09:58:41.946: INFO: Pod downwardapi-volume-34553611-e527-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:58:41.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m8btr" for this suite.
Aug 23 09:58:47.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:58:47.981: INFO: namespace: e2e-tests-projected-m8btr, resource: bindings, ignored listing per whitelist
Aug 23 09:58:48.038: INFO: namespace e2e-tests-projected-m8btr deletion completed in 6.088642428s

• [SLOW TEST:14.892 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
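The downward API volume test above projects the container's memory request into a file. A hedged pod-manifest sketch follows; the image, file path, and request value are illustrative rather than taken from the log, and the grep just sanity-checks the file locally.

```shell
# Sketch of a downward API volume exposing the container's memory request
# via resourceFieldRef (values illustrative).
cat > /tmp/downwardapi-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF
grep -c 'requests.memory' /tmp/downwardapi-pod.yaml
```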
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:58:48.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-3d3a3c62-e527-11ea-87d5-0242ac11000a
STEP: Creating secret with name secret-projected-all-test-volume-3d3a3c36-e527-11ea-87d5-0242ac11000a
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 23 09:58:48.221: INFO: Waiting up to 5m0s for pod "projected-volume-3d3a3bc5-e527-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-gdjjh" to be "success or failure"
Aug 23 09:58:48.226: INFO: Pod "projected-volume-3d3a3bc5-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066321ms
Aug 23 09:58:50.230: INFO: Pod "projected-volume-3d3a3bc5-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008105151s
Aug 23 09:58:52.233: INFO: Pod "projected-volume-3d3a3bc5-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011325061s
Aug 23 09:58:54.238: INFO: Pod "projected-volume-3d3a3bc5-e527-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016033535s
STEP: Saw pod success
Aug 23 09:58:54.238: INFO: Pod "projected-volume-3d3a3bc5-e527-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:58:54.241: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-3d3a3bc5-e527-11ea-87d5-0242ac11000a container projected-all-volume-test: 
STEP: delete the pod
Aug 23 09:58:54.294: INFO: Waiting for pod projected-volume-3d3a3bc5-e527-11ea-87d5-0242ac11000a to disappear
Aug 23 09:58:54.387: INFO: Pod projected-volume-3d3a3bc5-e527-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:58:54.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gdjjh" for this suite.
Aug 23 09:59:02.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:59:02.648: INFO: namespace: e2e-tests-projected-gdjjh, resource: bindings, ignored listing per whitelist
Aug 23 09:59:02.659: INFO: namespace e2e-tests-projected-gdjjh deletion completed in 8.266822225s

• [SLOW TEST:14.620 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
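The "Projected combined" spec above creates one projected volume whose sources span all three projection kinds (downwardAPI, configMap, secret), matching the configMap and secret the log shows being created first. A hedged sketch, with the container name `projected-all-volume-test` taken from the log and all other names, keys, and paths illustrative:

```python
# Sketch of a single projected volume combining all three source types;
# the test's point is that one mount can merge them into one directory tree.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "projected-volume-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "projected-all-volume-test",
            "image": "docker.io/library/busybox:1.29",
            "command": ["sh", "-c",
                        "cat /all/podname /all/cm/data-1 /all/secret/data-1"],
            "volumeMounts": [{"name": "podall", "mountPath": "/all"}],
        }],
        "volumes": [{
            "name": "podall",
            "projected": {"sources": [
                {"downwardAPI": {"items": [
                    {"path": "podname",
                     "fieldRef": {"fieldPath": "metadata.name"}},
                ]}},
                {"configMap": {"name": "configmap-projected-all-test-volume",
                               "items": [{"key": "data-1", "path": "cm/data-1"}]}},
                {"secret": {"name": "secret-projected-all-test-volume",
                            "items": [{"key": "data-1", "path": "secret/data-1"}]}},
            ]},
        }],
    },
}
```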
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:59:02.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug 23 09:59:02.944: INFO: Waiting up to 5m0s for pod "downward-api-45fe721c-e527-11ea-87d5-0242ac11000a" in namespace "e2e-tests-downward-api-sch58" to be "success or failure"
Aug 23 09:59:03.084: INFO: Pod "downward-api-45fe721c-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 139.548908ms
Aug 23 09:59:05.088: INFO: Pod "downward-api-45fe721c-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143567053s
Aug 23 09:59:07.091: INFO: Pod "downward-api-45fe721c-e527-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.147361513s
STEP: Saw pod success
Aug 23 09:59:07.091: INFO: Pod "downward-api-45fe721c-e527-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 09:59:07.094: INFO: Trying to get logs from node hunter-worker pod downward-api-45fe721c-e527-11ea-87d5-0242ac11000a container dapi-container: 
STEP: delete the pod
Aug 23 09:59:07.116: INFO: Waiting for pod downward-api-45fe721c-e527-11ea-87d5-0242ac11000a to disappear
Aug 23 09:59:07.120: INFO: Pod downward-api-45fe721c-e527-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 09:59:07.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sch58" for this suite.
Aug 23 09:59:13.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 09:59:13.234: INFO: namespace: e2e-tests-downward-api-sch58, resource: bindings, ignored listing per whitelist
Aug 23 09:59:13.248: INFO: namespace e2e-tests-downward-api-sch58 deletion completed in 6.097809112s

• [SLOW TEST:10.589 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
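The Downward API spec above injects the pod's UID into the container environment via a `fieldRef`. A minimal sketch of that pattern; `dapi-container` matches the log, while the image, command, and env-var name are illustrative:

```python
# Sketch of exposing metadata.uid as an environment variable
# through the downward API (env valueFrom.fieldRef).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downward-api-uid-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",
            "image": "docker.io/library/busybox:1.29",
            "command": ["sh", "-c", "env"],  # the test greps the logs for POD_UID=...
            "env": [{
                "name": "POD_UID",
                "valueFrom": {"fieldRef": {"fieldPath": "metadata.uid"}},
            }],
        }],
    },
}
```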
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 09:59:13.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-frnbw
Aug 23 09:59:19.973: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-frnbw
STEP: checking the pod's current state and verifying that restartCount is present
Aug 23 09:59:19.976: INFO: Initial restart count of pod liveness-exec is 0
Aug 23 10:00:10.998: INFO: Restart count of pod e2e-tests-container-probe-frnbw/liveness-exec is now 1 (51.022382641s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:00:11.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-frnbw" for this suite.
Aug 23 10:00:19.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:00:19.246: INFO: namespace: e2e-tests-container-probe-frnbw, resource: bindings, ignored listing per whitelist
Aug 23 10:00:19.264: INFO: namespace e2e-tests-container-probe-frnbw deletion completed in 8.170895376s

• [SLOW TEST:66.016 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
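The probe spec above follows the classic `liveness-exec` pattern: the container creates `/tmp/health`, deletes it after a delay, the `cat /tmp/health` exec probe then fails, and the kubelet restarts the container — which is exactly the restartCount 0 → 1 transition the log records after ~51s. A hedged sketch; the pod name matches the log, while the image and timings are illustrative assumptions:

```python
# Sketch of a pod whose exec liveness probe is designed to start failing,
# forcing a container restart that the test observes via restartCount.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "liveness-exec"},
    "spec": {
        "containers": [{
            "name": "liveness",
            "image": "docker.io/library/busybox:1.29",
            # Healthy for ~30s, then the probe file disappears.
            "args": ["/bin/sh", "-c",
                     "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"],
            "livenessProbe": {
                "exec": {"command": ["cat", "/tmp/health"]},
                "initialDelaySeconds": 15,
                "failureThreshold": 1,
            },
        }],
    },
}
```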
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:00:19.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 23 10:00:19.366: INFO: Waiting up to 5m0s for pod "pod-738ed403-e527-11ea-87d5-0242ac11000a" in namespace "e2e-tests-emptydir-n7whl" to be "success or failure"
Aug 23 10:00:19.373: INFO: Pod "pod-738ed403-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.634181ms
Aug 23 10:00:21.413: INFO: Pod "pod-738ed403-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047495533s
Aug 23 10:00:23.424: INFO: Pod "pod-738ed403-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057991645s
Aug 23 10:00:25.427: INFO: Pod "pod-738ed403-e527-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061674728s
STEP: Saw pod success
Aug 23 10:00:25.427: INFO: Pod "pod-738ed403-e527-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:00:25.430: INFO: Trying to get logs from node hunter-worker2 pod pod-738ed403-e527-11ea-87d5-0242ac11000a container test-container: 
STEP: delete the pod
Aug 23 10:00:25.516: INFO: Waiting for pod pod-738ed403-e527-11ea-87d5-0242ac11000a to disappear
Aug 23 10:00:25.527: INFO: Pod pod-738ed403-e527-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:00:25.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-n7whl" for this suite.
Aug 23 10:00:31.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:00:31.563: INFO: namespace: e2e-tests-emptydir-n7whl, resource: bindings, ignored listing per whitelist
Aug 23 10:00:31.616: INFO: namespace e2e-tests-emptydir-n7whl deletion completed in 6.084615307s

• [SLOW TEST:12.351 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
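The EmptyDir spec above ("root,0777,default") verifies that a default-medium emptyDir is mounted with mode 0777 and is writable as root. The real test uses a dedicated mount-test image; the sketch below is an illustrative equivalent using a plain `stat` check, with all names assumptions apart from `test-container` from the log:

```python
# Sketch of the emptydir permission check: an emptyDir with no medium set
# (i.e. backed by node disk) mounted into a container that inspects its mode.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-0777-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "test-container",
            "image": "docker.io/library/busybox:1.29",
            # Prints the octal mode of the mount point; the test expects 777.
            "command": ["sh", "-c", "stat -c %a /test-volume"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
        "volumes": [{"name": "test-volume", "emptyDir": {}}],  # {} = default medium
    },
}
```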
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:00:31.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-l9km2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-l9km2 to expose endpoints map[]
Aug 23 10:00:31.811: INFO: Get endpoints failed (5.975884ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Aug 23 10:00:32.815: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-l9km2 exposes endpoints map[] (1.009905994s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-l9km2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-l9km2 to expose endpoints map[pod1:[100]]
Aug 23 10:00:36.943: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-l9km2 exposes endpoints map[pod1:[100]] (4.120735861s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-l9km2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-l9km2 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 23 10:00:41.100: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-l9km2 exposes endpoints map[pod1:[100] pod2:[101]] (4.152748456s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-l9km2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-l9km2 to expose endpoints map[pod2:[101]]
Aug 23 10:00:42.185: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-l9km2 exposes endpoints map[pod2:[101]] (1.062463018s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-l9km2
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-l9km2 to expose endpoints map[]
Aug 23 10:00:43.353: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-l9km2 exposes endpoints map[] (1.162928679s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:00:43.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-l9km2" for this suite.
Aug 23 10:00:51.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:00:51.731: INFO: namespace: e2e-tests-services-l9km2, resource: bindings, ignored listing per whitelist
Aug 23 10:00:51.766: INFO: namespace e2e-tests-services-l9km2 deletion completed in 8.114409089s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:20.150 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
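The Services spec above watches the Endpoints object converge as pods matching the service selector come and go: empty, then `pod1:[100]`, then `pod1:[100] pod2:[101]`, and back down again as the pods are deleted. A hedged sketch of a two-port service of the kind being exercised; the service name matches the log, while the selector, port numbers on the service side, and port names are illustrative (the target ports 100 and 101 correspond to the endpoint ports in the log):

```python
# Sketch of a multiport service; each backend pod declares one of the
# two containerPorts (100 or 101), so each pod appears under one port's
# endpoints only.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "multi-endpoint-test"},
    "spec": {
        "selector": {"app": "multi-endpoint-test"},
        "ports": [
            {"name": "portname1", "port": 80, "targetPort": 100},
            {"name": "portname2", "port": 81, "targetPort": 101},
        ],
    },
}
```

Because endpoint updates are asynchronous, the test polls with a timeout rather than asserting immediately — hence the initial "Get endpoints failed ... ignoring for 5s" line before the empty map is validated.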
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:00:51.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug 23 10:00:51.943: INFO: Waiting up to 5m0s for pod "downward-api-86f04882-e527-11ea-87d5-0242ac11000a" in namespace "e2e-tests-downward-api-cj488" to be "success or failure"
Aug 23 10:00:51.956: INFO: Pod "downward-api-86f04882-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.352155ms
Aug 23 10:00:54.114: INFO: Pod "downward-api-86f04882-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170949415s
Aug 23 10:00:56.118: INFO: Pod "downward-api-86f04882-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175024611s
Aug 23 10:00:58.122: INFO: Pod "downward-api-86f04882-e527-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.178944435s
STEP: Saw pod success
Aug 23 10:00:58.122: INFO: Pod "downward-api-86f04882-e527-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:00:58.125: INFO: Trying to get logs from node hunter-worker2 pod downward-api-86f04882-e527-11ea-87d5-0242ac11000a container dapi-container: 
STEP: delete the pod
Aug 23 10:00:58.161: INFO: Waiting for pod downward-api-86f04882-e527-11ea-87d5-0242ac11000a to disappear
Aug 23 10:00:58.222: INFO: Pod downward-api-86f04882-e527-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:00:58.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-cj488" for this suite.
Aug 23 10:01:04.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:01:04.305: INFO: namespace: e2e-tests-downward-api-cj488, resource: bindings, ignored listing per whitelist
Aug 23 10:01:04.333: INFO: namespace e2e-tests-downward-api-cj488 deletion completed in 6.104084552s

• [SLOW TEST:12.567 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
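The second Downward API spec above surfaces all four compute quantities (requests and limits, CPU and memory) as environment variables via `resourceFieldRef`. A hedged sketch; `dapi-container` matches the log, while the image, resource values, and env-var naming scheme are illustrative:

```python
# Sketch of projecting a container's own resource requests/limits into
# its environment; resourceFieldRef requires the container name.
resources = ["requests.cpu", "requests.memory", "limits.cpu", "limits.memory"]
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downward-api-resources-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "dapi-container",
            "image": "docker.io/library/busybox:1.29",
            "command": ["sh", "-c", "env"],
            "resources": {
                "requests": {"cpu": "250m", "memory": "32Mi"},
                "limits": {"cpu": "500m", "memory": "64Mi"},
            },
            "env": [
                {
                    # e.g. "requests.cpu" -> "REQUESTS_CPU"
                    "name": r.replace(".", "_").upper(),
                    "valueFrom": {"resourceFieldRef": {
                        "containerName": "dapi-container",
                        "resource": r,
                    }},
                }
                for r in resources
            ],
        }],
    },
}
```

Note that CPU quantities are rounded up to whole cores when exposed this way unless a `divisor` is set on the `resourceFieldRef`.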
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:01:04.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 10:01:04.458: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 23 10:01:04.483: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:04.485: INFO: Number of nodes with available pods: 0
Aug 23 10:01:04.485: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:01:05.490: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:05.494: INFO: Number of nodes with available pods: 0
Aug 23 10:01:05.494: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:01:06.697: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:06.710: INFO: Number of nodes with available pods: 0
Aug 23 10:01:06.710: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:01:07.491: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:07.494: INFO: Number of nodes with available pods: 0
Aug 23 10:01:07.494: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:01:08.552: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:08.556: INFO: Number of nodes with available pods: 0
Aug 23 10:01:08.556: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:01:09.596: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:09.599: INFO: Number of nodes with available pods: 1
Aug 23 10:01:09.599: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 23 10:01:10.553: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:10.650: INFO: Number of nodes with available pods: 2
Aug 23 10:01:10.650: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 23 10:01:10.920: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:10.920: INFO: Wrong image for pod: daemon-set-k5mkv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:10.927: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:12.031: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:12.031: INFO: Wrong image for pod: daemon-set-k5mkv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:12.034: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:12.930: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:12.930: INFO: Wrong image for pod: daemon-set-k5mkv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:12.933: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:14.176: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:14.176: INFO: Wrong image for pod: daemon-set-k5mkv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:14.176: INFO: Pod daemon-set-k5mkv is not available
Aug 23 10:01:14.343: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:14.970: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:14.970: INFO: Pod daemon-set-ln5t7 is not available
Aug 23 10:01:15.050: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:15.931: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:15.931: INFO: Pod daemon-set-ln5t7 is not available
Aug 23 10:01:15.935: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:17.186: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:17.186: INFO: Pod daemon-set-ln5t7 is not available
Aug 23 10:01:17.190: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:17.932: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:17.932: INFO: Pod daemon-set-ln5t7 is not available
Aug 23 10:01:17.936: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:18.977: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:18.993: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:19.931: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:19.936: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:20.930: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:20.930: INFO: Pod daemon-set-gt6bg is not available
Aug 23 10:01:20.934: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:21.932: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:21.932: INFO: Pod daemon-set-gt6bg is not available
Aug 23 10:01:21.937: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:22.945: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:22.945: INFO: Pod daemon-set-gt6bg is not available
Aug 23 10:01:22.948: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:23.930: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:23.930: INFO: Pod daemon-set-gt6bg is not available
Aug 23 10:01:23.934: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:24.931: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:24.931: INFO: Pod daemon-set-gt6bg is not available
Aug 23 10:01:24.933: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:25.931: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:25.931: INFO: Pod daemon-set-gt6bg is not available
Aug 23 10:01:25.936: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:26.959: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:26.959: INFO: Pod daemon-set-gt6bg is not available
Aug 23 10:01:26.963: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:27.930: INFO: Wrong image for pod: daemon-set-gt6bg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 23 10:01:27.930: INFO: Pod daemon-set-gt6bg is not available
Aug 23 10:01:27.933: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:28.931: INFO: Pod daemon-set-gsbfj is not available
Aug 23 10:01:28.935: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 23 10:01:28.969: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:28.995: INFO: Number of nodes with available pods: 1
Aug 23 10:01:28.995: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 23 10:01:29.999: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:30.001: INFO: Number of nodes with available pods: 1
Aug 23 10:01:30.001: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 23 10:01:32.296: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:33.259: INFO: Number of nodes with available pods: 1
Aug 23 10:01:33.259: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 23 10:01:34.000: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:34.003: INFO: Number of nodes with available pods: 1
Aug 23 10:01:34.003: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 23 10:01:35.000: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:35.004: INFO: Number of nodes with available pods: 1
Aug 23 10:01:35.004: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 23 10:01:36.001: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:36.005: INFO: Number of nodes with available pods: 1
Aug 23 10:01:36.005: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 23 10:01:37.235: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:37.239: INFO: Number of nodes with available pods: 1
Aug 23 10:01:37.239: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 23 10:01:38.000: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:38.003: INFO: Number of nodes with available pods: 1
Aug 23 10:01:38.003: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 23 10:01:39.206: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:39.210: INFO: Number of nodes with available pods: 1
Aug 23 10:01:39.210: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 23 10:01:40.199: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:40.202: INFO: Number of nodes with available pods: 1
Aug 23 10:01:40.202: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 23 10:01:41.044: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:41.047: INFO: Number of nodes with available pods: 1
Aug 23 10:01:41.047: INFO: Node hunter-worker2 is running more than one daemon pod
Aug 23 10:01:42.629: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:01:42.662: INFO: Number of nodes with available pods: 2
Aug 23 10:01:42.662: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-gb7bw, will wait for the garbage collector to delete the pods
Aug 23 10:01:43.441: INFO: Deleting DaemonSet.extensions daemon-set took: 381.505404ms
Aug 23 10:01:44.241: INFO: Terminating DaemonSet.extensions daemon-set pods took: 800.253608ms
Aug 23 10:01:48.744: INFO: Number of nodes with available pods: 0
Aug 23 10:01:48.744: INFO: Number of running nodes: 0, number of available pods: 0
Aug 23 10:01:48.746: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gb7bw/daemonsets","resourceVersion":"1684927"},"items":null}

Aug 23 10:01:48.748: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gb7bw/pods","resourceVersion":"1684927"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:01:48.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-gb7bw" for this suite.
Aug 23 10:02:00.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:02:00.837: INFO: namespace: e2e-tests-daemonsets-gb7bw, resource: bindings, ignored listing per whitelist
Aug 23 10:02:00.859: INFO: namespace e2e-tests-daemonsets-gb7bw deletion completed in 12.099230431s

• [SLOW TEST:56.526 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
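The rolling-update sequence above (one node available, then two) corresponds to a DaemonSet whose `updateStrategy` is `RollingUpdate`. A minimal manifest sketch of the kind of object this test drives — the names and labels are illustrative, only the DaemonSet name and target image appear in the log:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                # name used by the test, per the log
spec:
  selector:
    matchLabels:
      app: daemon-set             # label key/value are illustrative
  updateStrategy:
    type: RollingUpdate           # on template change, pods are replaced node by node
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0   # image the update rolls to, per the log
```

Note that the repeated "can't tolerate node hunter-control-plane" lines follow from the pod template carrying no toleration for the `node-role.kubernetes.io/master:NoSchedule` taint, so the control-plane node is excluded from the per-node availability check.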
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:02:00.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:02:02.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-76glz" for this suite.
Aug 23 10:02:11.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:02:11.142: INFO: namespace: e2e-tests-kubelet-test-76glz, resource: bindings, ignored listing per whitelist
Aug 23 10:02:11.187: INFO: namespace e2e-tests-kubelet-test-76glz deletion completed in 8.132871s

• [SLOW TEST:10.328 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
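The Kubelet test above schedules "a busybox command that always fails" and then verifies the pod can still be deleted. A sketch of such a pod — the name and exact command are assumptions, not taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bin-false                 # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]       # always exits non-zero, so the container never succeeds
```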
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:02:11.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-vkssk
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 23 10:02:12.003: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 23 10:02:43.292: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.101:8080/dial?request=hostName&protocol=http&host=10.244.2.100&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-vkssk PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 10:02:43.292: INFO: >>> kubeConfig: /root/.kube/config
I0823 10:02:43.321177       6 log.go:172] (0xc000aa31e0) (0xc0019c9180) Create stream
I0823 10:02:43.321211       6 log.go:172] (0xc000aa31e0) (0xc0019c9180) Stream added, broadcasting: 1
I0823 10:02:43.322870       6 log.go:172] (0xc000aa31e0) Reply frame received for 1
I0823 10:02:43.322916       6 log.go:172] (0xc000aa31e0) (0xc001791cc0) Create stream
I0823 10:02:43.322928       6 log.go:172] (0xc000aa31e0) (0xc001791cc0) Stream added, broadcasting: 3
I0823 10:02:43.323612       6 log.go:172] (0xc000aa31e0) Reply frame received for 3
I0823 10:02:43.323640       6 log.go:172] (0xc000aa31e0) (0xc0019c9220) Create stream
I0823 10:02:43.323650       6 log.go:172] (0xc000aa31e0) (0xc0019c9220) Stream added, broadcasting: 5
I0823 10:02:43.324365       6 log.go:172] (0xc000aa31e0) Reply frame received for 5
I0823 10:02:43.406152       6 log.go:172] (0xc000aa31e0) Data frame received for 3
I0823 10:02:43.406191       6 log.go:172] (0xc001791cc0) (3) Data frame handling
I0823 10:02:43.406209       6 log.go:172] (0xc001791cc0) (3) Data frame sent
I0823 10:02:43.406991       6 log.go:172] (0xc000aa31e0) Data frame received for 3
I0823 10:02:43.407014       6 log.go:172] (0xc001791cc0) (3) Data frame handling
I0823 10:02:43.407267       6 log.go:172] (0xc000aa31e0) Data frame received for 5
I0823 10:02:43.407297       6 log.go:172] (0xc0019c9220) (5) Data frame handling
I0823 10:02:43.409302       6 log.go:172] (0xc000aa31e0) Data frame received for 1
I0823 10:02:43.409328       6 log.go:172] (0xc0019c9180) (1) Data frame handling
I0823 10:02:43.409346       6 log.go:172] (0xc0019c9180) (1) Data frame sent
I0823 10:02:43.409360       6 log.go:172] (0xc000aa31e0) (0xc0019c9180) Stream removed, broadcasting: 1
I0823 10:02:43.409376       6 log.go:172] (0xc000aa31e0) Go away received
I0823 10:02:43.409588       6 log.go:172] (0xc000aa31e0) (0xc0019c9180) Stream removed, broadcasting: 1
I0823 10:02:43.409633       6 log.go:172] (0xc000aa31e0) (0xc001791cc0) Stream removed, broadcasting: 3
I0823 10:02:43.409646       6 log.go:172] (0xc000aa31e0) (0xc0019c9220) Stream removed, broadcasting: 5
Aug 23 10:02:43.409: INFO: Waiting for endpoints: map[]
Aug 23 10:02:43.413: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.101:8080/dial?request=hostName&protocol=http&host=10.244.1.100&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-vkssk PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 10:02:43.413: INFO: >>> kubeConfig: /root/.kube/config
I0823 10:02:43.442304       6 log.go:172] (0xc000b126e0) (0xc0009f1e00) Create stream
I0823 10:02:43.442333       6 log.go:172] (0xc000b126e0) (0xc0009f1e00) Stream added, broadcasting: 1
I0823 10:02:43.446444       6 log.go:172] (0xc000b126e0) Reply frame received for 1
I0823 10:02:43.446497       6 log.go:172] (0xc000b126e0) (0xc0020ba1e0) Create stream
I0823 10:02:43.446516       6 log.go:172] (0xc000b126e0) (0xc0020ba1e0) Stream added, broadcasting: 3
I0823 10:02:43.447865       6 log.go:172] (0xc000b126e0) Reply frame received for 3
I0823 10:02:43.447906       6 log.go:172] (0xc000b126e0) (0xc0009f1ea0) Create stream
I0823 10:02:43.447922       6 log.go:172] (0xc000b126e0) (0xc0009f1ea0) Stream added, broadcasting: 5
I0823 10:02:43.449354       6 log.go:172] (0xc000b126e0) Reply frame received for 5
I0823 10:02:43.526971       6 log.go:172] (0xc000b126e0) Data frame received for 3
I0823 10:02:43.526996       6 log.go:172] (0xc0020ba1e0) (3) Data frame handling
I0823 10:02:43.527015       6 log.go:172] (0xc0020ba1e0) (3) Data frame sent
I0823 10:02:43.527516       6 log.go:172] (0xc000b126e0) Data frame received for 3
I0823 10:02:43.527530       6 log.go:172] (0xc0020ba1e0) (3) Data frame handling
I0823 10:02:43.527809       6 log.go:172] (0xc000b126e0) Data frame received for 5
I0823 10:02:43.527833       6 log.go:172] (0xc0009f1ea0) (5) Data frame handling
I0823 10:02:43.529053       6 log.go:172] (0xc000b126e0) Data frame received for 1
I0823 10:02:43.529066       6 log.go:172] (0xc0009f1e00) (1) Data frame handling
I0823 10:02:43.529082       6 log.go:172] (0xc0009f1e00) (1) Data frame sent
I0823 10:02:43.529234       6 log.go:172] (0xc000b126e0) (0xc0009f1e00) Stream removed, broadcasting: 1
I0823 10:02:43.529318       6 log.go:172] (0xc000b126e0) (0xc0009f1e00) Stream removed, broadcasting: 1
I0823 10:02:43.529352       6 log.go:172] (0xc000b126e0) (0xc0020ba1e0) Stream removed, broadcasting: 3
I0823 10:02:43.529363       6 log.go:172] (0xc000b126e0) (0xc0009f1ea0) Stream removed, broadcasting: 5
Aug 23 10:02:43.529: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:02:43.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0823 10:02:43.529528       6 log.go:172] (0xc000b126e0) Go away received
STEP: Destroying namespace "e2e-tests-pod-network-test-vkssk" for this suite.
Aug 23 10:03:03.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:03:03.720: INFO: namespace: e2e-tests-pod-network-test-vkssk, resource: bindings, ignored listing per whitelist
Aug 23 10:03:03.891: INFO: namespace e2e-tests-pod-network-test-vkssk deletion completed in 20.357946483s

• [SLOW TEST:52.703 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
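The `ExecWithOptions` lines above show how the intra-pod check works: a host-networked exec pod curls one test webserver's `/dial` endpoint, asking it to reach a peer pod over HTTP and report that peer's `hostName`. A sketch of the probing pod, assuming a busybox-style image with curl available (the real suite uses a dedicated hostexec image; the pod IPs are the ones from the log and are cluster-specific):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod   # pod name as it appears in the log
spec:
  hostNetwork: true
  restartPolicy: Never
  containers:
  - name: hostexec
    image: busybox                # assumption; substitute an image that ships curl
    command:
    - /bin/sh
    - -c
    # ask the webserver at 10.244.1.101 to dial the peer at 10.244.2.100 and echo its hostName
    - "curl -g -q -s 'http://10.244.1.101:8080/dial?request=hostName&protocol=http&host=10.244.2.100&port=8080&tries=1'"
```

The test passes once the set of endpoints still awaited is empty, which is what `Waiting for endpoints: map[]` records.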
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:03:03.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Aug 23 10:03:04.062: INFO: Waiting up to 5m0s for pod "client-containers-d5baf870-e527-11ea-87d5-0242ac11000a" in namespace "e2e-tests-containers-n6q9f" to be "success or failure"
Aug 23 10:03:04.066: INFO: Pod "client-containers-d5baf870-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.450819ms
Aug 23 10:03:06.070: INFO: Pod "client-containers-d5baf870-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007508805s
Aug 23 10:03:08.079: INFO: Pod "client-containers-d5baf870-e527-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016642955s
STEP: Saw pod success
Aug 23 10:03:08.079: INFO: Pod "client-containers-d5baf870-e527-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:03:08.081: INFO: Trying to get logs from node hunter-worker2 pod client-containers-d5baf870-e527-11ea-87d5-0242ac11000a container test-container: 
STEP: delete the pod
Aug 23 10:03:08.121: INFO: Waiting for pod client-containers-d5baf870-e527-11ea-87d5-0242ac11000a to disappear
Aug 23 10:03:08.163: INFO: Pod client-containers-d5baf870-e527-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:03:08.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-n6q9f" for this suite.
Aug 23 10:03:14.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:03:14.341: INFO: namespace: e2e-tests-containers-n6q9f, resource: bindings, ignored listing per whitelist
Aug 23 10:03:14.357: INFO: namespace e2e-tests-containers-n6q9f deletion completed in 6.190114642s

• [SLOW TEST:10.466 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
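The entrypoint-override test relies on the mapping between pod fields and Docker image directives: a container's `command` replaces the image's ENTRYPOINT, and `args` replaces its CMD. A minimal sketch of the pod the test creates, with an illustrative command (only the container name `test-container` comes from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # stand-in for the e2e entrypoint-tester image
    command: ["/bin/echo", "override"]   # `command` overrides the image ENTRYPOINT entirely
```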
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:03:14.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Aug 23 10:03:14.571: INFO: Waiting up to 5m0s for pod "client-containers-dbfdfd76-e527-11ea-87d5-0242ac11000a" in namespace "e2e-tests-containers-cqlxf" to be "success or failure"
Aug 23 10:03:14.593: INFO: Pod "client-containers-dbfdfd76-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.879348ms
Aug 23 10:03:16.597: INFO: Pod "client-containers-dbfdfd76-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025662912s
Aug 23 10:03:18.601: INFO: Pod "client-containers-dbfdfd76-e527-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029849374s
STEP: Saw pod success
Aug 23 10:03:18.601: INFO: Pod "client-containers-dbfdfd76-e527-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:03:18.604: INFO: Trying to get logs from node hunter-worker pod client-containers-dbfdfd76-e527-11ea-87d5-0242ac11000a container test-container: 
STEP: delete the pod
Aug 23 10:03:18.631: INFO: Waiting for pod client-containers-dbfdfd76-e527-11ea-87d5-0242ac11000a to disappear
Aug 23 10:03:18.635: INFO: Pod client-containers-dbfdfd76-e527-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:03:18.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-cqlxf" for this suite.
Aug 23 10:03:24.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:03:24.705: INFO: namespace: e2e-tests-containers-cqlxf, resource: bindings, ignored listing per whitelist
Aug 23 10:03:24.740: INFO: namespace e2e-tests-containers-cqlxf deletion completed in 6.101759828s

• [SLOW TEST:10.383 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:03:24.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 23 10:03:24.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-lvnpf'
Aug 23 10:03:27.385: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 23 10:03:27.386: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Aug 23 10:03:31.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-lvnpf'
Aug 23 10:03:31.807: INFO: stderr: ""
Aug 23 10:03:31.807: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:03:31.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lvnpf" for this suite.
Aug 23 10:03:54.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:03:54.239: INFO: namespace: e2e-tests-kubectl-lvnpf, resource: bindings, ignored listing per whitelist
Aug 23 10:03:54.265: INFO: namespace e2e-tests-kubectl-lvnpf deletion completed in 22.400338267s

• [SLOW TEST:29.524 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
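The stderr captured above warns that `kubectl run --generator=deployment/v1beta1` is deprecated; the recommended replacements are `kubectl create deployment` or applying a manifest directly. A sketch of the equivalent Deployment, using the name and image from the log (the `run` label convention mirrors what the generator produced):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine   # image passed to kubectl run, per the log
```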
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:03:54.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 23 10:03:54.399: INFO: Waiting up to 5m0s for pod "pod-f3bae063-e527-11ea-87d5-0242ac11000a" in namespace "e2e-tests-emptydir-cgsg9" to be "success or failure"
Aug 23 10:03:54.403: INFO: Pod "pod-f3bae063-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.36473ms
Aug 23 10:03:56.538: INFO: Pod "pod-f3bae063-e527-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138566718s
Aug 23 10:03:58.543: INFO: Pod "pod-f3bae063-e527-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 4.143278124s
Aug 23 10:04:00.547: INFO: Pod "pod-f3bae063-e527-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.147774967s
STEP: Saw pod success
Aug 23 10:04:00.547: INFO: Pod "pod-f3bae063-e527-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:04:00.549: INFO: Trying to get logs from node hunter-worker pod pod-f3bae063-e527-11ea-87d5-0242ac11000a container test-container: 
STEP: delete the pod
Aug 23 10:04:00.620: INFO: Waiting for pod pod-f3bae063-e527-11ea-87d5-0242ac11000a to disappear
Aug 23 10:04:00.631: INFO: Pod pod-f3bae063-e527-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:04:00.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cgsg9" for this suite.
Aug 23 10:04:06.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:04:06.691: INFO: namespace: e2e-tests-emptydir-cgsg9, resource: bindings, ignored listing per whitelist
Aug 23 10:04:06.728: INFO: namespace e2e-tests-emptydir-cgsg9 deletion completed in 6.093250915s

• [SLOW TEST:12.463 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
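The `(root,0666,default)` case mounts an emptyDir on the node's default medium and verifies a file created with mode 0666. A sketch of the pod shape, with a plain shell command standing in for the suite's mounttest binary (paths and names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox               # stand-in for the e2e mounttest image
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # "default medium": node storage, as opposed to medium: Memory
```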
SS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:04:06.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 23 10:04:11.435: INFO: Successfully updated pod "pod-update-fb2842ea-e527-11ea-87d5-0242ac11000a"
STEP: verifying the updated pod is in kubernetes
Aug 23 10:04:11.678: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:04:11.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8f7vd" for this suite.
Aug 23 10:04:35.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:04:35.736: INFO: namespace: e2e-tests-pods-8f7vd, resource: bindings, ignored listing per whitelist
Aug 23 10:04:35.765: INFO: namespace e2e-tests-pods-8f7vd deletion completed in 24.081647382s

• [SLOW TEST:29.038 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:04:35.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:04:49.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-427qs" for this suite.
Aug 23 10:05:14.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:05:14.219: INFO: namespace: e2e-tests-replication-controller-427qs, resource: bindings, ignored listing per whitelist
Aug 23 10:05:14.223: INFO: namespace e2e-tests-replication-controller-427qs deletion completed in 24.329944644s

• [SLOW TEST:38.457 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
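The adoption test's given/when/then steps map to two objects: an orphan pod labeled `name: pod-adoption`, then a ReplicationController whose selector matches that label, so the controller takes ownership of the existing pod instead of creating a new one. A sketch (the label key/value come from the log's step text; images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption           # label the controller's selector will match
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption           # matches the pre-existing pod, which is adopted
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```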
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:05:14.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 23 10:05:19.050: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2363996f-e528-11ea-87d5-0242ac11000a"
Aug 23 10:05:19.050: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2363996f-e528-11ea-87d5-0242ac11000a" in namespace "e2e-tests-pods-mbclf" to be "terminated due to deadline exceeded"
Aug 23 10:05:19.059: INFO: Pod "pod-update-activedeadlineseconds-2363996f-e528-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 8.690584ms
Aug 23 10:05:21.068: INFO: Pod "pod-update-activedeadlineseconds-2363996f-e528-11ea-87d5-0242ac11000a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.018128099s
Aug 23 10:05:21.068: INFO: Pod "pod-update-activedeadlineseconds-2363996f-e528-11ea-87d5-0242ac11000a" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:05:21.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-mbclf" for this suite.
Aug 23 10:05:27.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:05:27.648: INFO: namespace: e2e-tests-pods-mbclf, resource: bindings, ignored listing per whitelist
Aug 23 10:05:27.655: INFO: namespace e2e-tests-pods-mbclf deletion completed in 6.58192218s

• [SLOW TEST:13.432 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
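The test above presumably submits a long-running pod and then patches `spec.activeDeadlineSeconds` down, after which the kubelet fails the pod with reason `DeadlineExceeded` (visible in the Phase="Failed" line). A hedged sketch of that flow as plain dicts (field names follow the Pod API; the image and values are placeholders, not taken from the log):

```python
# Initial pod with a generous active deadline.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-update-activedeadlineseconds"},
    "spec": {
        "containers": [{"name": "main", "image": "example/pause"}],  # placeholder image
        "activeDeadlineSeconds": 600,
    },
}

def deadline_patch(seconds):
    # JSON-merge-patch body that tightens the deadline on a running pod;
    # once the (now shorter) deadline elapses, the pod fails with
    # reason DeadlineExceeded.
    return {"spec": {"activeDeadlineSeconds": seconds}}

patch = deadline_patch(5)
```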
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:05:27.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 23 10:05:42.109: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:05:42.519: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 23 10:05:44.519: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:05:44.523: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 23 10:05:46.519: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:05:46.522: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 23 10:05:48.519: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:05:48.523: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 23 10:05:50.519: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:05:50.522: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 23 10:05:52.519: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:05:52.536: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 23 10:05:54.519: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:05:54.523: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 23 10:05:56.519: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:05:56.522: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 23 10:05:58.519: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:05:58.523: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 23 10:06:00.519: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:06:00.522: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 23 10:06:02.520: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:06:02.530: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 23 10:06:04.519: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:06:04.523: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 23 10:06:06.519: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:06:07.328: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 23 10:06:08.519: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:06:08.584: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 23 10:06:10.519: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 23 10:06:10.579: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:06:10.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-bm5nb" for this suite.
Aug 23 10:06:36.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:06:36.695: INFO: namespace: e2e-tests-container-lifecycle-hook-bm5nb, resource: bindings, ignored listing per whitelist
Aug 23 10:06:36.700: INFO: namespace e2e-tests-container-lifecycle-hook-bm5nb deletion completed in 26.118054439s

• [SLOW TEST:69.045 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
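The lifecycle-hook test above creates a pod with a postStart exec hook and then polls until the pod is deleted. An illustrative pod spec for such a hook, as a plain dict (names, image, and command are placeholders; the hook command runs in the container right after it starts):

```python
pod_with_hook = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-poststart-exec-hook"},
    "spec": {
        "containers": [{
            "name": "main",
            "image": "example/agnhost",  # placeholder image
            "lifecycle": {
                # Exec hook executed inside the container immediately
                # after the container is created.
                "postStart": {
                    "exec": {"command": ["sh", "-c", "echo hook ran"]}
                }
            },
        }],
    },
}
```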
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:06:36.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Aug 23 10:06:36.982: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 23 10:06:37.649: INFO: Waiting for terminating namespaces to be deleted...
Aug 23 10:06:37.805: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Aug 23 10:06:37.811: INFO: kindnet-kvcmt from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 23 10:06:37.811: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 23 10:06:37.811: INFO: kube-proxy-xm64c from kube-system started at 2020-08-15 09:32:58 +0000 UTC (1 container statuses recorded)
Aug 23 10:06:37.811: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 23 10:06:37.811: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Aug 23 10:06:37.815: INFO: kube-proxy-7x47x from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 23 10:06:37.815: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 23 10:06:37.815: INFO: kindnet-l4sc5 from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 23 10:06:37.815: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-58eebcdc-e528-11ea-87d5-0242ac11000a 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-58eebcdc-e528-11ea-87d5-0242ac11000a off the node hunter-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-58eebcdc-e528-11ea-87d5-0242ac11000a
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:06:50.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-m6scl" for this suite.
Aug 23 10:07:09.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:07:09.369: INFO: namespace: e2e-tests-sched-pred-m6scl, resource: bindings, ignored listing per whitelist
Aug 23 10:07:09.405: INFO: namespace e2e-tests-sched-pred-m6scl deletion completed in 18.510328263s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:32.705 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
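The scheduler-predicates test above labels a node and relaunches the pod with a matching `nodeSelector`. The predicate it validates is simple to model: a pod fits only nodes whose labels include every requested key/value pair. A minimal sketch (the e2e label below is a placeholder standing in for the random `kubernetes.io/e2e-...` label the test applies):

```python
def node_selector_fits(node_labels, node_selector):
    # NodeSelector requires every requested key/value pair to be present
    # on the node, with exact value equality.
    return all(node_labels.get(k) == v for k, v in node_selector.items())

node_labels = {
    "kubernetes.io/hostname": "hunter-worker",
    "kubernetes.io/e2e-example": "42",  # placeholder for the random test label
}
print(node_selector_fits(node_labels, {"kubernetes.io/e2e-example": "42"}))  # True
```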
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:07:09.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:07:09.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jpvlq" for this suite.
Aug 23 10:07:31.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:07:31.885: INFO: namespace: e2e-tests-pods-jpvlq, resource: bindings, ignored listing per whitelist
Aug 23 10:07:31.909: INFO: namespace e2e-tests-pods-jpvlq deletion completed in 22.177282182s

• [SLOW TEST:22.503 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
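The QoS test above only checks that a class is set on the submitted pod. The classification rules themselves can be sketched from the documented Guaranteed/Burstable/BestEffort definitions (a simplified model, not the kubelet's code):

```python
def qos_class(containers):
    # BestEffort: no container sets any cpu/memory request or limit.
    # Guaranteed: every container sets cpu and memory limits, and its
    #             requests (if set) equal its limits.
    # Burstable:  everything else.
    reqs = [c.get("resources", {}).get("requests", {}) for c in containers]
    lims = [c.get("resources", {}).get("limits", {}) for c in containers]
    if not any(reqs) and not any(lims):
        return "BestEffort"
    guaranteed = all(
        set(l) == {"cpu", "memory"} and r in (l, {})
        for r, l in zip(reqs, lims)
    )
    return "Guaranteed" if guaranteed else "Burstable"

print(qos_class([{}]))  # BestEffort
```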
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:07:31.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 23 10:07:32.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75ab5e34-e528-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-v5txc" to be "success or failure"
Aug 23 10:07:32.487: INFO: Pod "downwardapi-volume-75ab5e34-e528-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.275877ms
Aug 23 10:07:34.492: INFO: Pod "downwardapi-volume-75ab5e34-e528-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037202458s
Aug 23 10:07:36.752: INFO: Pod "downwardapi-volume-75ab5e34-e528-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296799854s
Aug 23 10:07:38.887: INFO: Pod "downwardapi-volume-75ab5e34-e528-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.431477128s
STEP: Saw pod success
Aug 23 10:07:38.887: INFO: Pod "downwardapi-volume-75ab5e34-e528-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:07:38.890: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-75ab5e34-e528-11ea-87d5-0242ac11000a container client-container: 
STEP: delete the pod
Aug 23 10:07:39.025: INFO: Waiting for pod downwardapi-volume-75ab5e34-e528-11ea-87d5-0242ac11000a to disappear
Aug 23 10:07:39.030: INFO: Pod downwardapi-volume-75ab5e34-e528-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:07:39.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v5txc" for this suite.
Aug 23 10:07:45.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:07:45.213: INFO: namespace: e2e-tests-projected-v5txc, resource: bindings, ignored listing per whitelist
Aug 23 10:07:45.254: INFO: namespace e2e-tests-projected-v5txc deletion completed in 6.220357101s

• [SLOW TEST:13.344 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
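The projected downward API test above verifies that `defaultMode` is applied to the projected files. An illustrative volume spec showing where that field lives (0o400 is an arbitrary example mode; names and image are placeholders):

```python
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-defaultmode"},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "example/mounttest",  # placeholder image
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "projected": {
                "defaultMode": 0o400,  # file mode applied to projected files
                "sources": [{
                    "downwardAPI": {
                        "items": [{"path": "podname",
                                   "fieldRef": {"fieldPath": "metadata.name"}}]
                    }
                }],
            },
        }],
    },
}
```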
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:07:45.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 23 10:07:45.877: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7db0af4a-e528-11ea-87d5-0242ac11000a" in namespace "e2e-tests-downward-api-zc2qp" to be "success or failure"
Aug 23 10:07:45.961: INFO: Pod "downwardapi-volume-7db0af4a-e528-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 84.110137ms
Aug 23 10:07:48.008: INFO: Pod "downwardapi-volume-7db0af4a-e528-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130606308s
Aug 23 10:07:50.183: INFO: Pod "downwardapi-volume-7db0af4a-e528-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.305896358s
Aug 23 10:07:52.187: INFO: Pod "downwardapi-volume-7db0af4a-e528-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 6.310184171s
Aug 23 10:07:54.219: INFO: Pod "downwardapi-volume-7db0af4a-e528-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.341913598s
STEP: Saw pod success
Aug 23 10:07:54.219: INFO: Pod "downwardapi-volume-7db0af4a-e528-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:07:54.222: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7db0af4a-e528-11ea-87d5-0242ac11000a container client-container: 
STEP: delete the pod
Aug 23 10:07:55.101: INFO: Waiting for pod downwardapi-volume-7db0af4a-e528-11ea-87d5-0242ac11000a to disappear
Aug 23 10:07:55.502: INFO: Pod downwardapi-volume-7db0af4a-e528-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:07:55.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-zc2qp" for this suite.
Aug 23 10:08:01.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:08:02.161: INFO: namespace: e2e-tests-downward-api-zc2qp, resource: bindings, ignored listing per whitelist
Aug 23 10:08:02.181: INFO: namespace e2e-tests-downward-api-zc2qp deletion completed in 6.675656916s

• [SLOW TEST:16.927 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
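The downward API volume test above exposes the container's cpu limit as a file via a `resourceFieldRef`. An illustrative spec (the `containerName` must name the container whose limit is projected; the limit value, names, and image are placeholders):

```python
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-cpu-limit"},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "example/mounttest",  # placeholder image
            "resources": {"limits": {"cpu": "500m"}},
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "cpu_limit",
                    # Projects limits.cpu of the named container into the file.
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.cpu",
                    },
                }],
            },
        }],
    },
}
```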
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:08:02.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug 23 10:08:02.443: INFO: Waiting up to 5m0s for pod "downward-api-878b1638-e528-11ea-87d5-0242ac11000a" in namespace "e2e-tests-downward-api-9qvrx" to be "success or failure"
Aug 23 10:08:02.482: INFO: Pod "downward-api-878b1638-e528-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.672313ms
Aug 23 10:08:04.662: INFO: Pod "downward-api-878b1638-e528-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219346276s
Aug 23 10:08:06.666: INFO: Pod "downward-api-878b1638-e528-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223116338s
Aug 23 10:08:08.866: INFO: Pod "downward-api-878b1638-e528-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 6.423413866s
Aug 23 10:08:10.871: INFO: Pod "downward-api-878b1638-e528-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.427845375s
STEP: Saw pod success
Aug 23 10:08:10.871: INFO: Pod "downward-api-878b1638-e528-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:08:10.874: INFO: Trying to get logs from node hunter-worker pod downward-api-878b1638-e528-11ea-87d5-0242ac11000a container dapi-container: 
STEP: delete the pod
Aug 23 10:08:10.991: INFO: Waiting for pod downward-api-878b1638-e528-11ea-87d5-0242ac11000a to disappear
Aug 23 10:08:11.013: INFO: Pod downward-api-878b1638-e528-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:08:11.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9qvrx" for this suite.
Aug 23 10:08:17.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:08:17.075: INFO: namespace: e2e-tests-downward-api-9qvrx, resource: bindings, ignored listing per whitelist
Aug 23 10:08:17.099: INFO: namespace e2e-tests-downward-api-9qvrx deletion completed in 6.083092407s

• [SLOW TEST:14.918 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
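The downward API test above injects the host IP into the container as an environment variable. The mechanism is a `fieldRef` on `status.hostIP`; an illustrative sketch (names and image are placeholders):

```python
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downward-api-host-ip"},
    "spec": {
        "containers": [{
            "name": "dapi-container",
            "image": "example/busybox",  # placeholder image
            "env": [{
                "name": "HOST_IP",
                # Resolved at runtime to the IP of the node running the pod.
                "valueFrom": {"fieldRef": {"fieldPath": "status.hostIP"}},
            }],
        }],
    },
}
```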
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:08:17.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-90754016-e528-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 10:08:17.491: INFO: Waiting up to 5m0s for pod "pod-configmaps-907c5060-e528-11ea-87d5-0242ac11000a" in namespace "e2e-tests-configmap-6nn9w" to be "success or failure"
Aug 23 10:08:17.505: INFO: Pod "pod-configmaps-907c5060-e528-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.546177ms
Aug 23 10:08:19.509: INFO: Pod "pod-configmaps-907c5060-e528-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01790746s
Aug 23 10:08:21.513: INFO: Pod "pod-configmaps-907c5060-e528-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021594733s
Aug 23 10:08:23.517: INFO: Pod "pod-configmaps-907c5060-e528-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026254915s
STEP: Saw pod success
Aug 23 10:08:23.517: INFO: Pod "pod-configmaps-907c5060-e528-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:08:23.520: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-907c5060-e528-11ea-87d5-0242ac11000a container configmap-volume-test: 
STEP: delete the pod
Aug 23 10:08:23.558: INFO: Waiting for pod pod-configmaps-907c5060-e528-11ea-87d5-0242ac11000a to disappear
Aug 23 10:08:23.572: INFO: Pod pod-configmaps-907c5060-e528-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:08:23.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6nn9w" for this suite.
Aug 23 10:08:29.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:08:29.613: INFO: namespace: e2e-tests-configmap-6nn9w, resource: bindings, ignored listing per whitelist
Aug 23 10:08:29.652: INFO: namespace e2e-tests-configmap-6nn9w deletion completed in 6.077012293s

• [SLOW TEST:12.553 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
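The ConfigMap test above consumes a volume while running as a non-root user, which is expressed through the pod's `securityContext`. An illustrative sketch (the uid, names, and image are placeholders):

```python
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps-nonroot"},
    "spec": {
        "securityContext": {"runAsUser": 1000},  # non-root uid (placeholder)
        "containers": [{
            "name": "configmap-volume-test",
            "image": "example/mounttest",  # placeholder image
            "volumeMounts": [{"name": "cm", "mountPath": "/etc/configmap"}],
        }],
        "volumes": [{
            "name": "cm",
            # Mounts the named ConfigMap's keys as files under the mountPath.
            "configMap": {"name": "configmap-test-volume"},
        }],
    },
}
```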
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:08:29.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 10:08:29.790: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 23 10:08:34.795: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 23 10:08:34.795: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Aug 23 10:08:34.903: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-hkwvx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hkwvx/deployments/test-cleanup-deployment,UID:9adf604d-e528-11ea-a485-0242ac120004,ResourceVersion:1686222,Generation:1,CreationTimestamp:2020-08-23 10:08:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Aug 23 10:08:34.989: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Aug 23 10:08:34.989: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Aug 23 10:08:34.989: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-hkwvx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hkwvx/replicasets/test-cleanup-controller,UID:97e21f9b-e528-11ea-a485-0242ac120004,ResourceVersion:1686223,Generation:1,CreationTimestamp:2020-08-23 10:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 9adf604d-e528-11ea-a485-0242ac120004 0xc002795be7 0xc002795be8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 23 10:08:35.059: INFO: Pod "test-cleanup-controller-mvsbv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-mvsbv,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-hkwvx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hkwvx/pods/test-cleanup-controller-mvsbv,UID:97e3b6f1-e528-11ea-a485-0242ac120004,ResourceVersion:1686217,Generation:0,CreationTimestamp:2020-08-23 10:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 97e21f9b-e528-11ea-a485-0242ac120004 0xc0027ea267 0xc0027ea268}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qkbls {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qkbls,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qkbls true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027ea2e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027ea300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:08:29 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:08:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:08:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:08:29 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.113,StartTime:2020-08-23 10:08:29 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-23 10:08:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b8704054855354d8d6c1242c8653e83d931419863352c8128183a87eb053d93d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:08:35.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-hkwvx" for this suite.
Aug 23 10:08:45.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:08:46.012: INFO: namespace: e2e-tests-deployment-hkwvx, resource: bindings, ignored listing per whitelist
Aug 23 10:08:46.051: INFO: namespace e2e-tests-deployment-hkwvx deletion completed in 10.956703977s

• [SLOW TEST:16.398 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:08:46.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-vlp9
STEP: Creating a pod to test atomic-volume-subpath
Aug 23 10:08:47.216: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-vlp9" in namespace "e2e-tests-subpath-ssqqp" to be "success or failure"
Aug 23 10:08:47.394: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Pending", Reason="", readiness=false. Elapsed: 178.533155ms
Aug 23 10:08:49.398: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182410243s
Aug 23 10:08:51.402: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186689528s
Aug 23 10:08:53.406: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190871008s
Aug 23 10:08:55.410: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Running", Reason="", readiness=false. Elapsed: 8.194872126s
Aug 23 10:08:57.414: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Running", Reason="", readiness=false. Elapsed: 10.198242726s
Aug 23 10:08:59.417: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Running", Reason="", readiness=false. Elapsed: 12.201760415s
Aug 23 10:09:01.422: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Running", Reason="", readiness=false. Elapsed: 14.206304881s
Aug 23 10:09:03.426: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Running", Reason="", readiness=false. Elapsed: 16.210723651s
Aug 23 10:09:05.430: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Running", Reason="", readiness=false. Elapsed: 18.214747867s
Aug 23 10:09:07.435: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Running", Reason="", readiness=false. Elapsed: 20.219210317s
Aug 23 10:09:09.439: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Running", Reason="", readiness=false. Elapsed: 22.22302637s
Aug 23 10:09:11.444: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Running", Reason="", readiness=false. Elapsed: 24.228639097s
Aug 23 10:09:13.843: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Running", Reason="", readiness=false. Elapsed: 26.627530601s
Aug 23 10:09:15.847: INFO: Pod "pod-subpath-test-configmap-vlp9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.631123866s
STEP: Saw pod success
Aug 23 10:09:15.847: INFO: Pod "pod-subpath-test-configmap-vlp9" satisfied condition "success or failure"
Aug 23 10:09:15.849: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-vlp9 container test-container-subpath-configmap-vlp9: 
STEP: delete the pod
Aug 23 10:09:16.600: INFO: Waiting for pod pod-subpath-test-configmap-vlp9 to disappear
Aug 23 10:09:16.829: INFO: Pod pod-subpath-test-configmap-vlp9 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-vlp9
Aug 23 10:09:16.830: INFO: Deleting pod "pod-subpath-test-configmap-vlp9" in namespace "e2e-tests-subpath-ssqqp"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:09:16.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-ssqqp" for this suite.
Aug 23 10:09:23.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:09:23.099: INFO: namespace: e2e-tests-subpath-ssqqp, resource: bindings, ignored listing per whitelist
Aug 23 10:09:23.161: INFO: namespace e2e-tests-subpath-ssqqp deletion completed in 6.245597048s

• [SLOW TEST:37.110 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
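Editor's note: the "Waiting up to 5m0s for pod … to be 'success or failure'" lines above come from the framework polling the pod's phase roughly every two seconds until it reaches a terminal phase. A minimal Python sketch of that wait loop, assuming a hypothetical `get_phase` callback standing in for the pod GET:

```python
import time

def wait_for_success_or_failure(get_phase, timeout=300.0, interval=2.0,
                                sleep=time.sleep, clock=time.monotonic):
    """Poll get_phase() until a terminal phase or until the timeout expires.

    Mirrors the log's "success or failure" condition: Succeeded passes,
    Failed is an error, and Pending/Running keep the loop polling.
    (Illustrative sketch only; not the actual e2e framework code.)
    """
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        # The log prints one such line per poll: Phase=..., Elapsed=...
        print(f'Pod phase="{phase}", elapsed={clock() - start:.3f}s')
        if phase == "Succeeded":
            return True
        if phase == "Failed":
            raise RuntimeError("pod failed")
        sleep(interval)
    raise TimeoutError("timed out waiting for pod")
```

In the subpath test above, the pod went Pending, then Running, then Succeeded over roughly 28 seconds of such polls.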
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:09:23.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Aug 23 10:09:23.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bzl7v'
Aug 23 10:09:23.619: INFO: stderr: ""
Aug 23 10:09:23.619: INFO: stdout: "pod/pause created\n"
Aug 23 10:09:23.619: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 23 10:09:23.619: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-bzl7v" to be "running and ready"
Aug 23 10:09:23.645: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 26.718797ms
Aug 23 10:09:25.649: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030625164s
Aug 23 10:09:27.653: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034035572s
Aug 23 10:09:29.656: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.037815356s
Aug 23 10:09:29.657: INFO: Pod "pause" satisfied condition "running and ready"
Aug 23 10:09:29.657: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 23 10:09:29.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-bzl7v'
Aug 23 10:09:29.747: INFO: stderr: ""
Aug 23 10:09:29.748: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 23 10:09:29.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-bzl7v'
Aug 23 10:09:30.037: INFO: stderr: ""
Aug 23 10:09:30.037: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          7s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 23 10:09:30.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-bzl7v'
Aug 23 10:09:30.221: INFO: stderr: ""
Aug 23 10:09:30.221: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 23 10:09:30.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-bzl7v'
Aug 23 10:09:30.318: INFO: stderr: ""
Aug 23 10:09:30.318: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          7s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Aug 23 10:09:30.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bzl7v'
Aug 23 10:09:30.528: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 23 10:09:30.528: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 23 10:09:30.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-bzl7v'
Aug 23 10:09:30.630: INFO: stderr: "No resources found.\n"
Aug 23 10:09:30.630: INFO: stdout: ""
Aug 23 10:09:30.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-bzl7v -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 23 10:09:30.732: INFO: stderr: ""
Aug 23 10:09:30.732: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:09:30.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bzl7v" for this suite.
Aug 23 10:09:38.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:09:38.979: INFO: namespace: e2e-tests-kubectl-bzl7v, resource: bindings, ignored listing per whitelist
Aug 23 10:09:38.993: INFO: namespace e2e-tests-kubectl-bzl7v deletion completed in 8.257302018s

• [SLOW TEST:15.832 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
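Editor's note: the Kubectl label test above exercises both halves of kubectl's label syntax, `testing-label=testing-label-value` to set a label and `testing-label-` (trailing dash) to remove it. A hypothetical helper showing those semantics on a plain label map, not the actual kubectl implementation:

```python
def apply_label_args(labels, args):
    """Apply kubectl-style label arguments to a copy of a label map.

    "key=value" sets or overwrites the label; "key-" removes it, matching
    `kubectl label pods pause testing-label=testing-label-value` and
    `kubectl label pods pause testing-label-` from the log above.
    """
    out = dict(labels)
    for arg in args:
        if arg.endswith("-") and "=" not in arg:
            out.pop(arg[:-1], None)       # "key-" => delete the label
        else:
            key, _, value = arg.partition("=")
            out[key] = value              # "key=value" => set the label
    return out
```

This is why the second `kubectl get pod pause -L testing-label` in the log shows an empty TESTING-LABEL column: the trailing-dash form deleted the key.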
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:09:38.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:09:45.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-hlbqc" for this suite.
Aug 23 10:10:31.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:10:31.700: INFO: namespace: e2e-tests-kubelet-test-hlbqc, resource: bindings, ignored listing per whitelist
Aug 23 10:10:31.733: INFO: namespace e2e-tests-kubelet-test-hlbqc deletion completed in 46.131197562s

• [SLOW TEST:52.740 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:10:31.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:10:36.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-h5d2n" for this suite.
Aug 23 10:10:42.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:10:42.166: INFO: namespace: e2e-tests-emptydir-wrapper-h5d2n, resource: bindings, ignored listing per whitelist
Aug 23 10:10:42.229: INFO: namespace e2e-tests-emptydir-wrapper-h5d2n deletion completed in 6.142649292s

• [SLOW TEST:10.496 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:10:42.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Aug 23 10:10:42.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:10:42.832: INFO: stderr: ""
Aug 23 10:10:42.832: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 23 10:10:42.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:10:42.965: INFO: stderr: ""
Aug 23 10:10:42.965: INFO: stdout: "update-demo-nautilus-8mbd8 update-demo-nautilus-vsnbx "
Aug 23 10:10:42.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mbd8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:10:43.086: INFO: stderr: ""
Aug 23 10:10:43.086: INFO: stdout: ""
Aug 23 10:10:43.086: INFO: update-demo-nautilus-8mbd8 is created but not running
Aug 23 10:10:48.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:10:48.389: INFO: stderr: ""
Aug 23 10:10:48.389: INFO: stdout: "update-demo-nautilus-8mbd8 update-demo-nautilus-vsnbx "
Aug 23 10:10:48.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mbd8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:10:48.571: INFO: stderr: ""
Aug 23 10:10:48.571: INFO: stdout: ""
Aug 23 10:10:48.571: INFO: update-demo-nautilus-8mbd8 is created but not running
Aug 23 10:10:53.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:10:53.659: INFO: stderr: ""
Aug 23 10:10:53.659: INFO: stdout: "update-demo-nautilus-8mbd8 update-demo-nautilus-vsnbx "
Aug 23 10:10:53.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mbd8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:10:53.772: INFO: stderr: ""
Aug 23 10:10:53.772: INFO: stdout: "true"
Aug 23 10:10:53.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mbd8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:10:53.863: INFO: stderr: ""
Aug 23 10:10:53.863: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 23 10:10:53.863: INFO: validating pod update-demo-nautilus-8mbd8
Aug 23 10:10:53.867: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 23 10:10:53.867: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 23 10:10:53.867: INFO: update-demo-nautilus-8mbd8 is verified up and running
Aug 23 10:10:53.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vsnbx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:10:53.969: INFO: stderr: ""
Aug 23 10:10:53.969: INFO: stdout: "true"
Aug 23 10:10:53.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vsnbx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:10:54.081: INFO: stderr: ""
Aug 23 10:10:54.081: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 23 10:10:54.081: INFO: validating pod update-demo-nautilus-vsnbx
Aug 23 10:10:54.084: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 23 10:10:54.084: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 23 10:10:54.084: INFO: update-demo-nautilus-vsnbx is verified up and running
STEP: rolling-update to new replication controller
Aug 23 10:10:54.086: INFO: scanned /root for discovery docs: 
Aug 23 10:10:54.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:11:18.273: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 23 10:11:18.273: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 23 10:11:18.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:11:18.553: INFO: stderr: ""
Aug 23 10:11:18.553: INFO: stdout: "update-demo-kitten-jbzmg update-demo-kitten-kvj5r "
Aug 23 10:11:18.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jbzmg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:11:18.668: INFO: stderr: ""
Aug 23 10:11:18.668: INFO: stdout: "true"
Aug 23 10:11:18.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jbzmg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:11:18.759: INFO: stderr: ""
Aug 23 10:11:18.759: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 23 10:11:18.759: INFO: validating pod update-demo-kitten-jbzmg
Aug 23 10:11:18.763: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 23 10:11:18.763: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 23 10:11:18.763: INFO: update-demo-kitten-jbzmg is verified up and running
Aug 23 10:11:18.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kvj5r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:11:18.861: INFO: stderr: ""
Aug 23 10:11:18.861: INFO: stdout: "true"
Aug 23 10:11:18.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kvj5r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qfgjz'
Aug 23 10:11:18.965: INFO: stderr: ""
Aug 23 10:11:18.965: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 23 10:11:18.965: INFO: validating pod update-demo-kitten-kvj5r
Aug 23 10:11:18.968: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 23 10:11:18.968: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 23 10:11:18.968: INFO: update-demo-kitten-kvj5r is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:11:18.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qfgjz" for this suite.
Aug 23 10:11:42.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:11:43.054: INFO: namespace: e2e-tests-kubectl-qfgjz, resource: bindings, ignored listing per whitelist
Aug 23 10:11:43.076: INFO: namespace e2e-tests-kubectl-qfgjz deletion completed in 24.105147363s

• [SLOW TEST:60.847 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
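Editor's note: the Update Demo test above decides whether each pod is up by running a go-template through kubectl that prints "true" only when a container named "update-demo" has a `running` entry in its state. An equivalent of that predicate in Python, assuming a pod represented as a plain dict shaped like the Pod API object (illustrative only):

```python
def update_demo_running(pod):
    """Return "true" iff a container status named "update-demo" is running.

    Mirrors the go-template from the log:
      {{if (exists . "status" "containerStatuses")}}{{range ...}}
      {{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}
      {{end}}{{end}}
    An empty string means "created but not running", as the log reports.
    """
    out = ""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == "update-demo" and "running" in cs.get("state", {}):
            out += "true"
    return out
```

The empty-stdout polls early in the log (`update-demo-nautilus-8mbd8 is created but not running`) correspond to the empty-string case; once the container starts, the template emits "true" and the test moves on to checking the image.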
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:11:43.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-0b58559b-e529-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume secrets
Aug 23 10:11:43.584: INFO: Waiting up to 5m0s for pod "pod-secrets-0b58e27a-e529-11ea-87d5-0242ac11000a" in namespace "e2e-tests-secrets-khbh9" to be "success or failure"
Aug 23 10:11:43.676: INFO: Pod "pod-secrets-0b58e27a-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 91.816038ms
Aug 23 10:11:45.712: INFO: Pod "pod-secrets-0b58e27a-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127620664s
Aug 23 10:11:47.718: INFO: Pod "pod-secrets-0b58e27a-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133605504s
Aug 23 10:11:49.874: INFO: Pod "pod-secrets-0b58e27a-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.289575284s
Aug 23 10:11:51.877: INFO: Pod "pod-secrets-0b58e27a-e529-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 8.293377659s
Aug 23 10:11:53.881: INFO: Pod "pod-secrets-0b58e27a-e529-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.297158979s
STEP: Saw pod success
Aug 23 10:11:53.881: INFO: Pod "pod-secrets-0b58e27a-e529-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:11:53.883: INFO: Trying to get logs from node hunter-worker pod pod-secrets-0b58e27a-e529-11ea-87d5-0242ac11000a container secret-volume-test: 
STEP: delete the pod
Aug 23 10:11:53.973: INFO: Waiting for pod pod-secrets-0b58e27a-e529-11ea-87d5-0242ac11000a to disappear
Aug 23 10:11:54.040: INFO: Pod pod-secrets-0b58e27a-e529-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:11:54.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-khbh9" for this suite.
Aug 23 10:12:00.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:12:00.365: INFO: namespace: e2e-tests-secrets-khbh9, resource: bindings, ignored listing per whitelist
Aug 23 10:12:00.396: INFO: namespace e2e-tests-secrets-khbh9 deletion completed in 6.351459765s

• [SLOW TEST:17.319 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:12:00.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Aug 23 10:12:00.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zqcr2'
Aug 23 10:12:01.022: INFO: stderr: ""
Aug 23 10:12:01.022: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 23 10:12:02.025: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:12:02.025: INFO: Found 0 / 1
Aug 23 10:12:03.026: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:12:03.026: INFO: Found 0 / 1
Aug 23 10:12:04.025: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:12:04.025: INFO: Found 0 / 1
Aug 23 10:12:05.095: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:12:05.095: INFO: Found 0 / 1
Aug 23 10:12:06.026: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:12:06.026: INFO: Found 1 / 1
Aug 23 10:12:06.026: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 23 10:12:06.029: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:12:06.029: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 23 10:12:06.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-hzhhj --namespace=e2e-tests-kubectl-zqcr2 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 23 10:12:06.140: INFO: stderr: ""
Aug 23 10:12:06.140: INFO: stdout: "pod/redis-master-hzhhj patched\n"
STEP: checking annotations
Aug 23 10:12:06.216: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:12:06.216: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:12:06.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zqcr2" for this suite.
Aug 23 10:12:28.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:12:28.259: INFO: namespace: e2e-tests-kubectl-zqcr2, resource: bindings, ignored listing per whitelist
Aug 23 10:12:28.296: INFO: namespace e2e-tests-kubectl-zqcr2 deletion completed in 22.076909671s

• [SLOW TEST:27.900 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
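The kubectl patch test above applies `-p {"metadata":{"annotations":{"x":"y"}}}` to the redis-master pod. For pod objects this behaves like a merge patch: the patch document is merged into the existing object, adding the `x: y` annotation without disturbing other metadata. A minimal sketch of that merge semantics (the helper name `json_merge_patch` is illustrative, modeled on RFC 7386 merge-patch rules, not kubectl's actual implementation):

```python
def json_merge_patch(target, patch):
    """Apply an RFC 7386-style JSON merge patch to a dict:
    nested dicts are merged recursively, null deletes a key,
    and any other value replaces the target's value."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null in the patch deletes the key
        else:
            result[key] = json_merge_patch(result.get(key, {}), value)
    return result

# Merging the test's patch into a pod keeps existing metadata and adds x=y.
pod = {"metadata": {"name": "redis-master-hzhhj", "annotations": {"a": "b"}}}
patched = json_merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
print(patched["metadata"]["annotations"])  # both "a" and the new "x" present
```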
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:12:28.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Aug 23 10:12:28.437: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 23 10:12:28.445: INFO: Waiting for terminating namespaces to be deleted...
Aug 23 10:12:28.447: INFO: Logging pods the kubelet thinks are on node hunter-worker before test
Aug 23 10:12:28.452: INFO: kindnet-kvcmt from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container status recorded)
Aug 23 10:12:28.452: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 23 10:12:28.452: INFO: kube-proxy-xm64c from kube-system started at 2020-08-15 09:32:58 +0000 UTC (1 container status recorded)
Aug 23 10:12:28.452: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 23 10:12:28.452: INFO: 
Logging pods the kubelet thinks is on node hunter-worker2 before test
Aug 23 10:12:28.456: INFO: kube-proxy-7x47x from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 23 10:12:28.456: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 23 10:12:28.456: INFO: kindnet-l4sc5 from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 23 10:12:28.456: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162dde0348611698], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:12:29.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-nq7b5" for this suite.
Aug 23 10:12:35.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:12:35.583: INFO: namespace: e2e-tests-sched-pred-nq7b5, resource: bindings, ignored listing per whitelist
Aug 23 10:12:35.586: INFO: namespace e2e-tests-sched-pred-nq7b5 deletion completed in 6.111038805s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.289 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
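The FailedScheduling event above ("0/3 nodes are available: 3 node(s) didn't match node selector") comes from the scheduler's nodeSelector predicate: a node is feasible only if every key/value pair in the pod's nodeSelector appears verbatim in the node's labels. A minimal sketch of that rule (helper name and node labels are illustrative, not the scheduler's actual code):

```python
def node_selector_matches(node_labels, node_selector):
    """True only if every nodeSelector pair appears exactly in the node's labels."""
    return all(node_labels.get(key) == value for key, value in node_selector.items())

# Three nodes, none carrying the label the restricted pod asks for,
# reproduce the "0/3 nodes are available" scheduling failure.
nodes = {
    "hunter-control-plane": {"node-role.kubernetes.io/master": ""},
    "hunter-worker": {"kubernetes.io/hostname": "hunter-worker"},
    "hunter-worker2": {"kubernetes.io/hostname": "hunter-worker2"},
}
selector = {"nonexistent-label": "42"}  # a nonempty selector no node satisfies
feasible = [n for n, labels in nodes.items()
            if node_selector_matches(labels, selector)]
print(f"{len(feasible)}/{len(nodes)} nodes are available")  # 0/3 nodes are available
```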
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:12:35.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 23 10:12:35.736: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:35.739: INFO: Number of nodes with available pods: 0
Aug 23 10:12:35.739: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:36.863: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:36.867: INFO: Number of nodes with available pods: 0
Aug 23 10:12:36.867: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:38.080: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:38.083: INFO: Number of nodes with available pods: 0
Aug 23 10:12:38.083: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:38.970: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:38.973: INFO: Number of nodes with available pods: 0
Aug 23 10:12:38.973: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:40.239: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:40.724: INFO: Number of nodes with available pods: 0
Aug 23 10:12:40.724: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:40.871: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:41.113: INFO: Number of nodes with available pods: 0
Aug 23 10:12:41.113: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:42.096: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:42.104: INFO: Number of nodes with available pods: 0
Aug 23 10:12:42.104: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:42.952: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:43.119: INFO: Number of nodes with available pods: 1
Aug 23 10:12:43.119: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:43.742: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:43.745: INFO: Number of nodes with available pods: 2
Aug 23 10:12:43.745: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 23 10:12:43.826: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:43.828: INFO: Number of nodes with available pods: 1
Aug 23 10:12:43.828: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:44.832: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:44.834: INFO: Number of nodes with available pods: 1
Aug 23 10:12:44.834: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:45.835: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:45.837: INFO: Number of nodes with available pods: 1
Aug 23 10:12:45.837: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:46.833: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:46.836: INFO: Number of nodes with available pods: 1
Aug 23 10:12:46.836: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:47.833: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:47.836: INFO: Number of nodes with available pods: 1
Aug 23 10:12:47.836: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:48.833: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:48.835: INFO: Number of nodes with available pods: 1
Aug 23 10:12:48.835: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:49.832: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:49.834: INFO: Number of nodes with available pods: 1
Aug 23 10:12:49.834: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:50.835: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:50.837: INFO: Number of nodes with available pods: 1
Aug 23 10:12:50.837: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:51.832: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:51.835: INFO: Number of nodes with available pods: 1
Aug 23 10:12:51.835: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:52.833: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:52.836: INFO: Number of nodes with available pods: 1
Aug 23 10:12:52.836: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:53.832: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:53.834: INFO: Number of nodes with available pods: 1
Aug 23 10:12:53.834: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:54.833: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:54.836: INFO: Number of nodes with available pods: 1
Aug 23 10:12:54.836: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:55.856: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:55.858: INFO: Number of nodes with available pods: 1
Aug 23 10:12:55.858: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:56.832: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:56.834: INFO: Number of nodes with available pods: 1
Aug 23 10:12:56.834: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:57.889: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:57.892: INFO: Number of nodes with available pods: 1
Aug 23 10:12:57.892: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:58.832: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:58.835: INFO: Number of nodes with available pods: 1
Aug 23 10:12:58.835: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:12:59.833: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:12:59.836: INFO: Number of nodes with available pods: 1
Aug 23 10:12:59.836: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:13:00.832: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:13:00.835: INFO: Number of nodes with available pods: 1
Aug 23 10:13:00.835: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:13:01.833: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:13:01.836: INFO: Number of nodes with available pods: 2
Aug 23 10:13:01.836: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-b7z2k, will wait for the garbage collector to delete the pods
Aug 23 10:13:01.895: INFO: Deleting DaemonSet.extensions daemon-set took: 5.20512ms
Aug 23 10:13:01.995: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.174993ms
Aug 23 10:13:06.365: INFO: Number of nodes with available pods: 0
Aug 23 10:13:06.365: INFO: Number of running nodes: 0, number of available pods: 0
Aug 23 10:13:06.367: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-b7z2k/daemonsets","resourceVersion":"1687153"},"items":null}

Aug 23 10:13:06.556: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-b7z2k/pods","resourceVersion":"1687154"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:13:06.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-b7z2k" for this suite.
Aug 23 10:13:12.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:13:12.766: INFO: namespace: e2e-tests-daemonsets-b7z2k, resource: bindings, ignored listing per whitelist
Aug 23 10:13:12.785: INFO: namespace e2e-tests-daemonsets-b7z2k deletion completed in 6.217392129s

• [SLOW TEST:37.199 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
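The DaemonSet test repeatedly logs that its pods "can't tolerate node hunter-control-plane" because the DaemonSet spec carries no toleration for the master's `NoSchedule` taint, so that node is skipped when counting available pods. A simplified sketch of taint/toleration matching (a reduced version of the real rules; the helper name is illustrative): a toleration matches a taint when the effects agree (an empty toleration effect matches any effect) and either the key matches under operator `Exists`, or key and value both match under the default operator `Equal`.

```python
def tolerates(tolerations, taint):
    """Return True if any toleration in the list matches the taint."""
    for tol in tolerations:
        # A toleration with an explicit effect only matches that effect.
        if tol.get("effect") and tol["effect"] != taint["effect"]:
            continue
        if tol.get("operator", "Equal") == "Exists":
            # Empty key with Exists tolerates every taint.
            if not tol.get("key") or tol["key"] == taint["key"]:
                return True
        elif (tol.get("key") == taint["key"]
              and tol.get("value", "") == taint.get("value", "")):
            return True
    return False

master_taint = {"key": "node-role.kubernetes.io/master",
                "value": "", "effect": "NoSchedule"}
print(tolerates([], master_taint))  # False: no tolerations, node is skipped
print(tolerates([{"key": "node-role.kubernetes.io/master",
                  "operator": "Exists", "effect": "NoSchedule"}],
                master_taint))  # True: the node would be scheduled to
```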
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:13:12.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Aug 23 10:13:24.791: INFO: Successfully updated pod "annotationupdate41e3559c-e529-11ea-87d5-0242ac11000a"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:13:26.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7ljff" for this suite.
Aug 23 10:13:50.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:13:51.042: INFO: namespace: e2e-tests-downward-api-7ljff, resource: bindings, ignored listing per whitelist
Aug 23 10:13:51.057: INFO: namespace e2e-tests-downward-api-7ljff deletion completed in 24.133618745s

• [SLOW TEST:38.272 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:13:51.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 23 10:13:51.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5769033c-e529-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-ng6dh" to be "success or failure"
Aug 23 10:13:51.225: INFO: Pod "downwardapi-volume-5769033c-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.056514ms
Aug 23 10:13:53.455: INFO: Pod "downwardapi-volume-5769033c-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249306004s
Aug 23 10:13:55.767: INFO: Pod "downwardapi-volume-5769033c-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.56108047s
Aug 23 10:13:57.770: INFO: Pod "downwardapi-volume-5769033c-e529-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.564127561s
STEP: Saw pod success
Aug 23 10:13:57.770: INFO: Pod "downwardapi-volume-5769033c-e529-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:13:57.772: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5769033c-e529-11ea-87d5-0242ac11000a container client-container: 
STEP: delete the pod
Aug 23 10:13:57.892: INFO: Waiting for pod downwardapi-volume-5769033c-e529-11ea-87d5-0242ac11000a to disappear
Aug 23 10:13:57.894: INFO: Pod downwardapi-volume-5769033c-e529-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:13:57.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ng6dh" for this suite.
Aug 23 10:14:05.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:14:05.963: INFO: namespace: e2e-tests-projected-ng6dh, resource: bindings, ignored listing per whitelist
Aug 23 10:14:06.021: INFO: namespace e2e-tests-projected-ng6dh deletion completed in 8.123755287s

• [SLOW TEST:14.964 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:14:06.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-6080dc51-e529-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 10:14:06.511: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6092729c-e529-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-jmbdc" to be "success or failure"
Aug 23 10:14:06.529: INFO: Pod "pod-projected-configmaps-6092729c-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.488631ms
Aug 23 10:14:08.676: INFO: Pod "pod-projected-configmaps-6092729c-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16464528s
Aug 23 10:14:10.682: INFO: Pod "pod-projected-configmaps-6092729c-e529-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.170065799s
STEP: Saw pod success
Aug 23 10:14:10.682: INFO: Pod "pod-projected-configmaps-6092729c-e529-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:14:10.684: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-6092729c-e529-11ea-87d5-0242ac11000a container projected-configmap-volume-test: 
STEP: delete the pod
Aug 23 10:14:10.704: INFO: Waiting for pod pod-projected-configmaps-6092729c-e529-11ea-87d5-0242ac11000a to disappear
Aug 23 10:14:10.766: INFO: Pod pod-projected-configmaps-6092729c-e529-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:14:10.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jmbdc" for this suite.
Aug 23 10:14:16.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:14:16.827: INFO: namespace: e2e-tests-projected-jmbdc, resource: bindings, ignored listing per whitelist
Aug 23 10:14:16.932: INFO: namespace e2e-tests-projected-jmbdc deletion completed in 6.163263341s

• [SLOW TEST:10.911 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:14:16.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Aug 23 10:14:17.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7fptm'
Aug 23 10:14:19.753: INFO: stderr: ""
Aug 23 10:14:19.753: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 23 10:14:19.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7fptm'
Aug 23 10:14:19.869: INFO: stderr: ""
Aug 23 10:14:19.869: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Aug 23 10:14:24.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7fptm'
Aug 23 10:14:25.245: INFO: stderr: ""
Aug 23 10:14:25.245: INFO: stdout: "update-demo-nautilus-6g7qq update-demo-nautilus-j978d "
Aug 23 10:14:25.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6g7qq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fptm'
Aug 23 10:14:25.333: INFO: stderr: ""
Aug 23 10:14:25.333: INFO: stdout: ""
Aug 23 10:14:25.333: INFO: update-demo-nautilus-6g7qq is created but not running
Aug 23 10:14:30.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7fptm'
Aug 23 10:14:30.456: INFO: stderr: ""
Aug 23 10:14:30.456: INFO: stdout: "update-demo-nautilus-6g7qq update-demo-nautilus-j978d "
Aug 23 10:14:30.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6g7qq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fptm'
Aug 23 10:14:30.550: INFO: stderr: ""
Aug 23 10:14:30.550: INFO: stdout: "true"
Aug 23 10:14:30.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6g7qq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fptm'
Aug 23 10:14:30.643: INFO: stderr: ""
Aug 23 10:14:30.643: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 23 10:14:30.643: INFO: validating pod update-demo-nautilus-6g7qq
Aug 23 10:14:30.647: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 23 10:14:30.647: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 23 10:14:30.647: INFO: update-demo-nautilus-6g7qq is verified up and running
Aug 23 10:14:30.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j978d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fptm'
Aug 23 10:14:30.744: INFO: stderr: ""
Aug 23 10:14:30.745: INFO: stdout: "true"
Aug 23 10:14:30.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j978d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7fptm'
Aug 23 10:14:30.828: INFO: stderr: ""
Aug 23 10:14:30.828: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 23 10:14:30.828: INFO: validating pod update-demo-nautilus-j978d
Aug 23 10:14:30.831: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 23 10:14:30.831: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 23 10:14:30.831: INFO: update-demo-nautilus-j978d is verified up and running
STEP: using delete to clean up resources
Aug 23 10:14:30.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7fptm'
Aug 23 10:14:31.161: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 23 10:14:31.161: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 23 10:14:31.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-7fptm'
Aug 23 10:14:32.441: INFO: stderr: "No resources found.\n"
Aug 23 10:14:32.441: INFO: stdout: ""
Aug 23 10:14:32.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-7fptm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 23 10:14:32.750: INFO: stderr: ""
Aug 23 10:14:32.750: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:14:32.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7fptm" for this suite.
Aug 23 10:14:59.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:14:59.084: INFO: namespace: e2e-tests-kubectl-7fptm, resource: bindings, ignored listing per whitelist
Aug 23 10:14:59.136: INFO: namespace e2e-tests-kubectl-7fptm deletion completed in 26.334843249s

• [SLOW TEST:42.204 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
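The Update Demo loop above compares expected=2 against the number of pod names in the go-template output. The count can be reproduced from the stdout recorded in this log with plain shell word-splitting — a standalone sketch, not part of the suite:

```shell
# stdout captured by the suite at 10:14:25 (note the trailing space)
stdout="update-demo-nautilus-6g7qq update-demo-nautilus-j978d "
# word-split into positional parameters and count them, which is what
# the "Replicas for name=update-demo: expected=2 actual=N" check reports
set -- $stdout
echo "$#"   # prints 2
```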
SSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:14:59.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-8009093c-e529-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume secrets
Aug 23 10:14:59.410: INFO: Waiting up to 5m0s for pod "pod-secrets-8015392e-e529-11ea-87d5-0242ac11000a" in namespace "e2e-tests-secrets-k4ggf" to be "success or failure"
Aug 23 10:14:59.423: INFO: Pod "pod-secrets-8015392e-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.661537ms
Aug 23 10:15:01.432: INFO: Pod "pod-secrets-8015392e-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02152369s
Aug 23 10:15:03.435: INFO: Pod "pod-secrets-8015392e-e529-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025178217s
STEP: Saw pod success
Aug 23 10:15:03.435: INFO: Pod "pod-secrets-8015392e-e529-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:15:03.437: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-8015392e-e529-11ea-87d5-0242ac11000a container secret-volume-test: 
STEP: delete the pod
Aug 23 10:15:03.460: INFO: Waiting for pod pod-secrets-8015392e-e529-11ea-87d5-0242ac11000a to disappear
Aug 23 10:15:03.501: INFO: Pod pod-secrets-8015392e-e529-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:15:03.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-k4ggf" for this suite.
Aug 23 10:15:09.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:15:09.559: INFO: namespace: e2e-tests-secrets-k4ggf, resource: bindings, ignored listing per whitelist
Aug 23 10:15:09.609: INFO: namespace e2e-tests-secrets-k4ggf deletion completed in 6.087212114s
STEP: Destroying namespace "e2e-tests-secret-namespace-zlxsk" for this suite.
Aug 23 10:15:15.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:15:15.688: INFO: namespace: e2e-tests-secret-namespace-zlxsk, resource: bindings, ignored listing per whitelist
Aug 23 10:15:15.743: INFO: namespace e2e-tests-secret-namespace-zlxsk deletion completed in 6.13382499s

• [SLOW TEST:16.607 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:15:15.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug 23 10:15:15.900: INFO: Waiting up to 5m0s for pod "downward-api-89f18653-e529-11ea-87d5-0242ac11000a" in namespace "e2e-tests-downward-api-ngpkf" to be "success or failure"
Aug 23 10:15:15.910: INFO: Pod "downward-api-89f18653-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.720872ms
Aug 23 10:15:17.914: INFO: Pod "downward-api-89f18653-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013727572s
Aug 23 10:15:19.977: INFO: Pod "downward-api-89f18653-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076787989s
Aug 23 10:15:21.986: INFO: Pod "downward-api-89f18653-e529-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.085697094s
STEP: Saw pod success
Aug 23 10:15:21.986: INFO: Pod "downward-api-89f18653-e529-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:15:21.988: INFO: Trying to get logs from node hunter-worker pod downward-api-89f18653-e529-11ea-87d5-0242ac11000a container dapi-container: 
STEP: delete the pod
Aug 23 10:15:22.062: INFO: Waiting for pod downward-api-89f18653-e529-11ea-87d5-0242ac11000a to disappear
Aug 23 10:15:22.085: INFO: Pod downward-api-89f18653-e529-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:15:22.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ngpkf" for this suite.
Aug 23 10:15:28.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:15:28.120: INFO: namespace: e2e-tests-downward-api-ngpkf, resource: bindings, ignored listing per whitelist
Aug 23 10:15:28.172: INFO: namespace e2e-tests-downward-api-ngpkf deletion completed in 6.082185309s

• [SLOW TEST:12.429 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:15:28.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-91564a55-e529-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume secrets
Aug 23 10:15:28.336: INFO: Waiting up to 5m0s for pod "pod-secrets-91590520-e529-11ea-87d5-0242ac11000a" in namespace "e2e-tests-secrets-z2lr2" to be "success or failure"
Aug 23 10:15:28.346: INFO: Pod "pod-secrets-91590520-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.628824ms
Aug 23 10:15:30.350: INFO: Pod "pod-secrets-91590520-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013364508s
Aug 23 10:15:32.354: INFO: Pod "pod-secrets-91590520-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017501783s
Aug 23 10:15:34.358: INFO: Pod "pod-secrets-91590520-e529-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021352197s
STEP: Saw pod success
Aug 23 10:15:34.358: INFO: Pod "pod-secrets-91590520-e529-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:15:34.360: INFO: Trying to get logs from node hunter-worker pod pod-secrets-91590520-e529-11ea-87d5-0242ac11000a container secret-volume-test: 
STEP: delete the pod
Aug 23 10:15:34.404: INFO: Waiting for pod pod-secrets-91590520-e529-11ea-87d5-0242ac11000a to disappear
Aug 23 10:15:34.468: INFO: Pod pod-secrets-91590520-e529-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:15:34.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-z2lr2" for this suite.
Aug 23 10:15:40.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:15:40.574: INFO: namespace: e2e-tests-secrets-z2lr2, resource: bindings, ignored listing per whitelist
Aug 23 10:15:40.588: INFO: namespace e2e-tests-secrets-z2lr2 deletion completed in 6.115921919s

• [SLOW TEST:12.415 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:15:40.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 10:15:40.783: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/: 
alternatives.log
containers/

------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-9c8920a6-e529-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume secrets
Aug 23 10:15:47.106: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9c89b431-e529-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-99zlw" to be "success or failure"
Aug 23 10:15:47.139: INFO: Pod "pod-projected-secrets-9c89b431-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 32.440671ms
Aug 23 10:15:49.143: INFO: Pod "pod-projected-secrets-9c89b431-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036172936s
Aug 23 10:15:51.146: INFO: Pod "pod-projected-secrets-9c89b431-e529-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039474421s
STEP: Saw pod success
Aug 23 10:15:51.146: INFO: Pod "pod-projected-secrets-9c89b431-e529-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:15:51.148: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-9c89b431-e529-11ea-87d5-0242ac11000a container projected-secret-volume-test: 
STEP: delete the pod
Aug 23 10:15:51.209: INFO: Waiting for pod pod-projected-secrets-9c89b431-e529-11ea-87d5-0242ac11000a to disappear
Aug 23 10:15:51.283: INFO: Pod pod-projected-secrets-9c89b431-e529-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:15:51.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-99zlw" for this suite.
Aug 23 10:15:57.450: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:15:57.513: INFO: namespace: e2e-tests-projected-99zlw, resource: bindings, ignored listing per whitelist
Aug 23 10:15:57.526: INFO: namespace e2e-tests-projected-99zlw deletion completed in 6.224748144s

• [SLOW TEST:10.560 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:15:57.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 23 10:15:58.210: INFO: Waiting up to 5m0s for pod "pod-a3073498-e529-11ea-87d5-0242ac11000a" in namespace "e2e-tests-emptydir-jddrn" to be "success or failure"
Aug 23 10:15:58.227: INFO: Pod "pod-a3073498-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.685143ms
Aug 23 10:16:00.396: INFO: Pod "pod-a3073498-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186508675s
Aug 23 10:16:02.432: INFO: Pod "pod-a3073498-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.22254001s
Aug 23 10:16:04.480: INFO: Pod "pod-a3073498-e529-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.270219044s
STEP: Saw pod success
Aug 23 10:16:04.480: INFO: Pod "pod-a3073498-e529-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:16:04.720: INFO: Trying to get logs from node hunter-worker2 pod pod-a3073498-e529-11ea-87d5-0242ac11000a container test-container: 
STEP: delete the pod
Aug 23 10:16:04.737: INFO: Waiting for pod pod-a3073498-e529-11ea-87d5-0242ac11000a to disappear
Aug 23 10:16:04.766: INFO: Pod pod-a3073498-e529-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:16:04.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jddrn" for this suite.
Aug 23 10:16:10.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:16:10.829: INFO: namespace: e2e-tests-emptydir-jddrn, resource: bindings, ignored listing per whitelist
Aug 23 10:16:10.877: INFO: namespace e2e-tests-emptydir-jddrn deletion completed in 6.107631662s

• [SLOW TEST:13.351 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
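The (root,0666,tmpfs) case above writes a file with mode 0666 into a Memory-medium emptyDir and verifies the mode from inside the test container. A minimal sketch of just the mode check, on an ordinary temp file rather than the mounted volume (the mktemp path is illustrative):

```shell
# create a scratch file and give it the 0666 mode the test case name refers to
f=$(mktemp)
chmod 0666 "$f"
# print the octal mode back, analogous to what the test container verifies
stat -c '%a' "$f"   # prints 666 (chmod is not affected by umask)
rm -f "$f"
```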
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:16:10.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-aada0023-e529-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume secrets
Aug 23 10:16:11.213: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-aadd6612-e529-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-kvjdv" to be "success or failure"
Aug 23 10:16:11.229: INFO: Pod "pod-projected-secrets-aadd6612-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.624686ms
Aug 23 10:16:13.233: INFO: Pod "pod-projected-secrets-aadd6612-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020101098s
Aug 23 10:16:15.237: INFO: Pod "pod-projected-secrets-aadd6612-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023358145s
Aug 23 10:16:17.240: INFO: Pod "pod-projected-secrets-aadd6612-e529-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02664318s
STEP: Saw pod success
Aug 23 10:16:17.240: INFO: Pod "pod-projected-secrets-aadd6612-e529-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:16:17.242: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-aadd6612-e529-11ea-87d5-0242ac11000a container secret-volume-test: 
STEP: delete the pod
Aug 23 10:16:17.292: INFO: Waiting for pod pod-projected-secrets-aadd6612-e529-11ea-87d5-0242ac11000a to disappear
Aug 23 10:16:17.300: INFO: Pod pod-projected-secrets-aadd6612-e529-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:16:17.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kvjdv" for this suite.
Aug 23 10:16:23.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:16:23.388: INFO: namespace: e2e-tests-projected-kvjdv, resource: bindings, ignored listing per whitelist
Aug 23 10:16:23.403: INFO: namespace e2e-tests-projected-kvjdv deletion completed in 6.099543368s

• [SLOW TEST:12.526 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:16:23.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Aug 23 10:16:27.632: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:16:51.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-smnrw" for this suite.
Aug 23 10:16:57.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:16:57.776: INFO: namespace: e2e-tests-namespaces-smnrw, resource: bindings, ignored listing per whitelist
Aug 23 10:16:57.824: INFO: namespace e2e-tests-namespaces-smnrw deletion completed in 6.081310565s
STEP: Destroying namespace "e2e-tests-nsdeletetest-rjhhx" for this suite.
Aug 23 10:16:57.827: INFO: Namespace e2e-tests-nsdeletetest-rjhhx was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-v8md5" for this suite.
Aug 23 10:17:03.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:17:03.881: INFO: namespace: e2e-tests-nsdeletetest-v8md5, resource: bindings, ignored listing per whitelist
Aug 23 10:17:03.943: INFO: namespace e2e-tests-nsdeletetest-v8md5 deletion completed in 6.116590889s

• [SLOW TEST:40.540 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:17:03.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-624jx A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-624jx;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-624jx A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-624jx;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-624jx.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-624jx.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-624jx.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-624jx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-624jx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-624jx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-624jx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-624jx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-624jx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-624jx.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-624jx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 217.91.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.91.217_udp@PTR;check="$$(dig +tcp +noall +answer +search 217.91.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.91.217_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-624jx A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-624jx;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-624jx A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-624jx;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-624jx.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-624jx.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-624jx.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-624jx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-624jx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-624jx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-624jx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-624jx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-624jx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-624jx.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-624jx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 217.91.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.91.217_udp@PTR;check="$$(dig +tcp +noall +answer +search 217.91.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.91.217_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 23 10:17:20.233: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:20.264: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:20.268: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-624jx from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:20.272: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-624jx from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:20.275: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:20.279: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:20.282: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:20.285: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:20.302: INFO: Lookups using e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-624jx jessie_tcp@dns-test-service.e2e-tests-dns-624jx jessie_udp@dns-test-service.e2e-tests-dns-624jx.svc jessie_tcp@dns-test-service.e2e-tests-dns-624jx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc]

Aug 23 10:17:25.662: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:25.663: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:25.666: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-624jx from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:25.668: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-624jx from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:25.670: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:25.672: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:25.722: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:25.725: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:25.888: INFO: Lookups using e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-624jx jessie_tcp@dns-test-service.e2e-tests-dns-624jx jessie_udp@dns-test-service.e2e-tests-dns-624jx.svc jessie_tcp@dns-test-service.e2e-tests-dns-624jx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc]

Aug 23 10:17:30.975: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:30.977: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:30.979: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-624jx from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:30.980: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-624jx from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:30.982: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:30.984: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:30.986: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:30.988: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:31.003: INFO: Lookups using e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-624jx jessie_tcp@dns-test-service.e2e-tests-dns-624jx jessie_udp@dns-test-service.e2e-tests-dns-624jx.svc jessie_tcp@dns-test-service.e2e-tests-dns-624jx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc]

Aug 23 10:17:35.407: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:35.409: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:35.411: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-624jx from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:35.413: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-624jx from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:35.415: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:35.418: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:35.420: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:35.422: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc from pod e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a: the server could not find the requested resource (get pods dns-test-ca70b452-e529-11ea-87d5-0242ac11000a)
Aug 23 10:17:35.436: INFO: Lookups using e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-624jx jessie_tcp@dns-test-service.e2e-tests-dns-624jx jessie_udp@dns-test-service.e2e-tests-dns-624jx.svc jessie_tcp@dns-test-service.e2e-tests-dns-624jx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-624jx.svc]

Aug 23 10:17:40.442: INFO: DNS probes using e2e-tests-dns-624jx/dns-test-ca70b452-e529-11ea-87d5-0242ac11000a succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:17:42.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-624jx" for this suite.
Aug 23 10:17:48.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:17:48.982: INFO: namespace: e2e-tests-dns-624jx, resource: bindings, ignored listing per whitelist
Aug 23 10:17:49.002: INFO: namespace e2e-tests-dns-624jx deletion completed in 6.536675077s

• [SLOW TEST:45.059 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:17:49.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-e543f8e3-e529-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume secrets
Aug 23 10:17:49.150: INFO: Waiting up to 5m0s for pod "pod-secrets-e5477335-e529-11ea-87d5-0242ac11000a" in namespace "e2e-tests-secrets-qj5wb" to be "success or failure"
Aug 23 10:17:49.196: INFO: Pod "pod-secrets-e5477335-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 46.34392ms
Aug 23 10:17:51.201: INFO: Pod "pod-secrets-e5477335-e529-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051116178s
Aug 23 10:17:53.205: INFO: Pod "pod-secrets-e5477335-e529-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055451197s
STEP: Saw pod success
Aug 23 10:17:53.205: INFO: Pod "pod-secrets-e5477335-e529-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:17:53.209: INFO: Trying to get logs from node hunter-worker pod pod-secrets-e5477335-e529-11ea-87d5-0242ac11000a container secret-env-test: 
STEP: delete the pod
Aug 23 10:17:53.232: INFO: Waiting for pod pod-secrets-e5477335-e529-11ea-87d5-0242ac11000a to disappear
Aug 23 10:17:53.255: INFO: Pod pod-secrets-e5477335-e529-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:17:53.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qj5wb" for this suite.
Aug 23 10:17:59.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:17:59.287: INFO: namespace: e2e-tests-secrets-qj5wb, resource: bindings, ignored listing per whitelist
Aug 23 10:17:59.354: INFO: namespace e2e-tests-secrets-qj5wb deletion completed in 6.09585995s

• [SLOW TEST:10.352 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:17:59.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 23 10:17:59.483: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9dl84,SelfLink:/api/v1/namespaces/e2e-tests-watch-9dl84/configmaps/e2e-watch-test-configmap-a,UID:eb720a7b-e529-11ea-a485-0242ac120004,ResourceVersion:1688167,Generation:0,CreationTimestamp:2020-08-23 10:17:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 23 10:17:59.483: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9dl84,SelfLink:/api/v1/namespaces/e2e-tests-watch-9dl84/configmaps/e2e-watch-test-configmap-a,UID:eb720a7b-e529-11ea-a485-0242ac120004,ResourceVersion:1688167,Generation:0,CreationTimestamp:2020-08-23 10:17:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 23 10:18:09.541: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9dl84,SelfLink:/api/v1/namespaces/e2e-tests-watch-9dl84/configmaps/e2e-watch-test-configmap-a,UID:eb720a7b-e529-11ea-a485-0242ac120004,ResourceVersion:1688187,Generation:0,CreationTimestamp:2020-08-23 10:17:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 23 10:18:09.541: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9dl84,SelfLink:/api/v1/namespaces/e2e-tests-watch-9dl84/configmaps/e2e-watch-test-configmap-a,UID:eb720a7b-e529-11ea-a485-0242ac120004,ResourceVersion:1688187,Generation:0,CreationTimestamp:2020-08-23 10:17:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 23 10:18:19.689: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9dl84,SelfLink:/api/v1/namespaces/e2e-tests-watch-9dl84/configmaps/e2e-watch-test-configmap-a,UID:eb720a7b-e529-11ea-a485-0242ac120004,ResourceVersion:1688208,Generation:0,CreationTimestamp:2020-08-23 10:17:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 23 10:18:19.689: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9dl84,SelfLink:/api/v1/namespaces/e2e-tests-watch-9dl84/configmaps/e2e-watch-test-configmap-a,UID:eb720a7b-e529-11ea-a485-0242ac120004,ResourceVersion:1688208,Generation:0,CreationTimestamp:2020-08-23 10:17:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 23 10:18:30.095: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9dl84,SelfLink:/api/v1/namespaces/e2e-tests-watch-9dl84/configmaps/e2e-watch-test-configmap-a,UID:eb720a7b-e529-11ea-a485-0242ac120004,ResourceVersion:1688228,Generation:0,CreationTimestamp:2020-08-23 10:17:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 23 10:18:30.096: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-9dl84,SelfLink:/api/v1/namespaces/e2e-tests-watch-9dl84/configmaps/e2e-watch-test-configmap-a,UID:eb720a7b-e529-11ea-a485-0242ac120004,ResourceVersion:1688228,Generation:0,CreationTimestamp:2020-08-23 10:17:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 23 10:18:40.102: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9dl84,SelfLink:/api/v1/namespaces/e2e-tests-watch-9dl84/configmaps/e2e-watch-test-configmap-b,UID:03a86df0-e52a-11ea-a485-0242ac120004,ResourceVersion:1688247,Generation:0,CreationTimestamp:2020-08-23 10:18:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 23 10:18:40.102: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9dl84,SelfLink:/api/v1/namespaces/e2e-tests-watch-9dl84/configmaps/e2e-watch-test-configmap-b,UID:03a86df0-e52a-11ea-a485-0242ac120004,ResourceVersion:1688247,Generation:0,CreationTimestamp:2020-08-23 10:18:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 23 10:18:50.109: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9dl84,SelfLink:/api/v1/namespaces/e2e-tests-watch-9dl84/configmaps/e2e-watch-test-configmap-b,UID:03a86df0-e52a-11ea-a485-0242ac120004,ResourceVersion:1688267,Generation:0,CreationTimestamp:2020-08-23 10:18:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 23 10:18:50.109: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-9dl84,SelfLink:/api/v1/namespaces/e2e-tests-watch-9dl84/configmaps/e2e-watch-test-configmap-b,UID:03a86df0-e52a-11ea-a485-0242ac120004,ResourceVersion:1688267,Generation:0,CreationTimestamp:2020-08-23 10:18:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:19:00.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9dl84" for this suite.
Aug 23 10:19:06.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:19:06.227: INFO: namespace: e2e-tests-watch-9dl84, resource: bindings, ignored listing per whitelist
Aug 23 10:19:06.239: INFO: namespace e2e-tests-watch-9dl84 deletion completed in 6.125034153s

• [SLOW TEST:66.885 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:19:06.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 23 10:19:06.392: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-kqknk,SelfLink:/api/v1/namespaces/e2e-tests-watch-kqknk/configmaps/e2e-watch-test-watch-closed,UID:134ecf64-e52a-11ea-a485-0242ac120004,ResourceVersion:1688307,Generation:0,CreationTimestamp:2020-08-23 10:19:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 23 10:19:06.393: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-kqknk,SelfLink:/api/v1/namespaces/e2e-tests-watch-kqknk/configmaps/e2e-watch-test-watch-closed,UID:134ecf64-e52a-11ea-a485-0242ac120004,ResourceVersion:1688308,Generation:0,CreationTimestamp:2020-08-23 10:19:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 23 10:19:06.470: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-kqknk,SelfLink:/api/v1/namespaces/e2e-tests-watch-kqknk/configmaps/e2e-watch-test-watch-closed,UID:134ecf64-e52a-11ea-a485-0242ac120004,ResourceVersion:1688309,Generation:0,CreationTimestamp:2020-08-23 10:19:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 23 10:19:06.470: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-kqknk,SelfLink:/api/v1/namespaces/e2e-tests-watch-kqknk/configmaps/e2e-watch-test-watch-closed,UID:134ecf64-e52a-11ea-a485-0242ac120004,ResourceVersion:1688310,Generation:0,CreationTimestamp:2020-08-23 10:19:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:19:06.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-kqknk" for this suite.
Aug 23 10:19:12.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:19:12.563: INFO: namespace: e2e-tests-watch-kqknk, resource: bindings, ignored listing per whitelist
Aug 23 10:19:12.591: INFO: namespace e2e-tests-watch-kqknk deletion completed in 6.111485965s

• [SLOW TEST:6.351 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:19:12.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 23 10:19:13.044: INFO: Waiting up to 5m0s for pod "downwardapi-volume-173ec39a-e52a-11ea-87d5-0242ac11000a" in namespace "e2e-tests-downward-api-lsm5l" to be "success or failure"
Aug 23 10:19:13.108: INFO: Pod "downwardapi-volume-173ec39a-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 63.777975ms
Aug 23 10:19:15.141: INFO: Pod "downwardapi-volume-173ec39a-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096439858s
Aug 23 10:19:17.153: INFO: Pod "downwardapi-volume-173ec39a-e52a-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10859679s
STEP: Saw pod success
Aug 23 10:19:17.153: INFO: Pod "downwardapi-volume-173ec39a-e52a-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:19:17.156: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-173ec39a-e52a-11ea-87d5-0242ac11000a container client-container: 
STEP: delete the pod
Aug 23 10:19:17.191: INFO: Waiting for pod downwardapi-volume-173ec39a-e52a-11ea-87d5-0242ac11000a to disappear
Aug 23 10:19:17.216: INFO: Pod downwardapi-volume-173ec39a-e52a-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:19:17.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lsm5l" for this suite.
Aug 23 10:19:23.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:19:23.283: INFO: namespace: e2e-tests-downward-api-lsm5l, resource: bindings, ignored listing per whitelist
Aug 23 10:19:23.338: INFO: namespace e2e-tests-downward-api-lsm5l deletion completed in 6.118080208s

• [SLOW TEST:10.746 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:19:23.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-1d8076b7-e52a-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume secrets
Aug 23 10:19:23.481: INFO: Waiting up to 5m0s for pod "pod-secrets-1d830fb7-e52a-11ea-87d5-0242ac11000a" in namespace "e2e-tests-secrets-hfhnm" to be "success or failure"
Aug 23 10:19:23.503: INFO: Pod "pod-secrets-1d830fb7-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.856725ms
Aug 23 10:19:25.506: INFO: Pod "pod-secrets-1d830fb7-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02518901s
Aug 23 10:19:27.509: INFO: Pod "pod-secrets-1d830fb7-e52a-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028044447s
STEP: Saw pod success
Aug 23 10:19:27.509: INFO: Pod "pod-secrets-1d830fb7-e52a-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:19:27.511: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-1d830fb7-e52a-11ea-87d5-0242ac11000a container secret-volume-test: 
STEP: delete the pod
Aug 23 10:19:27.850: INFO: Waiting for pod pod-secrets-1d830fb7-e52a-11ea-87d5-0242ac11000a to disappear
Aug 23 10:19:27.874: INFO: Pod pod-secrets-1d830fb7-e52a-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:19:27.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hfhnm" for this suite.
Aug 23 10:19:33.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:19:33.955: INFO: namespace: e2e-tests-secrets-hfhnm, resource: bindings, ignored listing per whitelist
Aug 23 10:19:33.970: INFO: namespace e2e-tests-secrets-hfhnm deletion completed in 6.093309076s

• [SLOW TEST:10.632 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:19:33.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Aug 23 10:19:34.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bjm58'
Aug 23 10:19:34.511: INFO: stderr: ""
Aug 23 10:19:34.511: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Aug 23 10:19:35.515: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:19:35.515: INFO: Found 0 / 1
Aug 23 10:19:36.516: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:19:36.516: INFO: Found 0 / 1
Aug 23 10:19:37.782: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:19:37.782: INFO: Found 0 / 1
Aug 23 10:19:38.515: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:19:38.515: INFO: Found 0 / 1
Aug 23 10:19:39.515: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:19:39.515: INFO: Found 1 / 1
Aug 23 10:19:39.515: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 23 10:19:39.519: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:19:39.519: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Aug 23 10:19:39.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-v7vdh redis-master --namespace=e2e-tests-kubectl-bjm58'
Aug 23 10:19:39.626: INFO: stderr: ""
Aug 23 10:19:39.626: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 23 Aug 10:19:38.458 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Aug 10:19:38.458 # Server started, Redis version 3.2.12\n1:M 23 Aug 10:19:38.458 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Aug 10:19:38.458 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Aug 23 10:19:39.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-v7vdh redis-master --namespace=e2e-tests-kubectl-bjm58 --tail=1'
Aug 23 10:19:39.732: INFO: stderr: ""
Aug 23 10:19:39.732: INFO: stdout: "1:M 23 Aug 10:19:38.458 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Aug 23 10:19:39.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-v7vdh redis-master --namespace=e2e-tests-kubectl-bjm58 --limit-bytes=1'
Aug 23 10:19:39.849: INFO: stderr: ""
Aug 23 10:19:39.849: INFO: stdout: " "
STEP: exposing timestamps
Aug 23 10:19:39.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-v7vdh redis-master --namespace=e2e-tests-kubectl-bjm58 --tail=1 --timestamps'
Aug 23 10:19:39.977: INFO: stderr: ""
Aug 23 10:19:39.977: INFO: stdout: "2020-08-23T10:19:38.458986752Z 1:M 23 Aug 10:19:38.458 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Aug 23 10:19:42.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-v7vdh redis-master --namespace=e2e-tests-kubectl-bjm58 --since=1s'
Aug 23 10:19:42.586: INFO: stderr: ""
Aug 23 10:19:42.586: INFO: stdout: ""
Aug 23 10:19:42.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-v7vdh redis-master --namespace=e2e-tests-kubectl-bjm58 --since=24h'
Aug 23 10:19:42.696: INFO: stderr: ""
Aug 23 10:19:42.696: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 23 Aug 10:19:38.458 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Aug 10:19:38.458 # Server started, Redis version 3.2.12\n1:M 23 Aug 10:19:38.458 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Aug 10:19:38.458 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Aug 23 10:19:42.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bjm58'
Aug 23 10:19:42.806: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 23 10:19:42.806: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Aug 23 10:19:42.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-bjm58'
Aug 23 10:19:42.908: INFO: stderr: "No resources found.\n"
Aug 23 10:19:42.909: INFO: stdout: ""
Aug 23 10:19:42.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-bjm58 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 23 10:19:42.997: INFO: stderr: ""
Aug 23 10:19:42.998: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:19:42.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bjm58" for this suite.
Aug 23 10:20:05.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:20:05.131: INFO: namespace: e2e-tests-kubectl-bjm58, resource: bindings, ignored listing per whitelist
Aug 23 10:20:05.138: INFO: namespace e2e-tests-kubectl-bjm58 deletion completed in 22.136792566s

• [SLOW TEST:31.168 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
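The `--tail` and `--limit-bytes` flags exercised in the test above behave like their coreutils analogues; a minimal local sketch of that behaviour (the sample file and its contents are hypothetical, not taken from this run):

```shell
# Local illustration (hypothetical sample data) of what the kubectl flags do:
#   kubectl logs --tail=N        -> keep only the last N lines of the stream
#   kubectl logs --limit-bytes=N -> truncate the returned stream after N bytes
log=$(mktemp)
printf 'line one\nline two\nline three\n' > "$log"

last_line=$(tail -n 1 "$log")    # analogous to --tail=1
first_byte=$(head -c 1 "$log")   # analogous to --limit-bytes=1

echo "$last_line"
echo "$first_byte"
rm -f "$log"
```

Note that against a real cluster these filters are applied server-side by the kubelet, so only the requested portion of the log is transferred.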
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:20:05.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 10:20:05.428: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"3675260d-e52a-11ea-a485-0242ac120004", Controller:(*bool)(0xc0014740e2), BlockOwnerDeletion:(*bool)(0xc0014740e3)}}
Aug 23 10:20:05.436: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3670f18b-e52a-11ea-a485-0242ac120004", Controller:(*bool)(0xc001bea7b2), BlockOwnerDeletion:(*bool)(0xc001bea7b3)}}
Aug 23 10:20:05.491: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"367459da-e52a-11ea-a485-0242ac120004", Controller:(*bool)(0xc001f165d6), BlockOwnerDeletion:(*bool)(0xc001f165d7)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:20:10.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-qxmhr" for this suite.
Aug 23 10:20:16.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:20:17.112: INFO: namespace: e2e-tests-gc-qxmhr, resource: bindings, ignored listing per whitelist
Aug 23 10:20:17.127: INFO: namespace e2e-tests-gc-qxmhr deletion completed in 6.571087861s

• [SLOW TEST:11.988 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:20:17.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 23 10:20:25.606: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-3db575ea-e52a-11ea-87d5-0242ac11000a,GenerateName:,Namespace:e2e-tests-events-zbftk,SelfLink:/api/v1/namespaces/e2e-tests-events-zbftk/pods/send-events-3db575ea-e52a-11ea-87d5-0242ac11000a,UID:3db897c7-e52a-11ea-a485-0242ac120004,ResourceVersion:1688610,Generation:0,CreationTimestamp:2020-08-23 10:20:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 491103847,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-fwzt7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fwzt7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-fwzt7 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0010c88e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0010c8900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:20:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:20:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:20:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:20:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.8,PodIP:10.244.2.121,StartTime:2020-08-23 10:20:17 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-23 10:20:23 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://b85feb3087f9859875826a4861040d5759bf5259823cd0c502bde3cfd7f30818}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Aug 23 10:20:27.610: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 23 10:20:29.626: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:20:29.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-zbftk" for this suite.
Aug 23 10:21:09.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:21:09.712: INFO: namespace: e2e-tests-events-zbftk, resource: bindings, ignored listing per whitelist
Aug 23 10:21:09.818: INFO: namespace e2e-tests-events-zbftk deletion completed in 40.146806476s

• [SLOW TEST:52.690 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:21:09.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Aug 23 10:21:10.137: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 23 10:21:10.194: INFO: Waiting for terminating namespaces to be deleted...
Aug 23 10:21:10.196: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Aug 23 10:21:10.201: INFO: kindnet-kvcmt from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 23 10:21:10.201: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 23 10:21:10.201: INFO: kube-proxy-xm64c from kube-system started at 2020-08-15 09:32:58 +0000 UTC (1 container statuses recorded)
Aug 23 10:21:10.201: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 23 10:21:10.201: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Aug 23 10:21:10.206: INFO: kube-proxy-7x47x from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 23 10:21:10.206: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 23 10:21:10.206: INFO: kindnet-l4sc5 from kube-system started at 2020-08-15 09:33:02 +0000 UTC (1 container statuses recorded)
Aug 23 10:21:10.206: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
Aug 23 10:21:10.590: INFO: Pod kindnet-kvcmt requesting resource cpu=100m on Node hunter-worker
Aug 23 10:21:10.590: INFO: Pod kindnet-l4sc5 requesting resource cpu=100m on Node hunter-worker2
Aug 23 10:21:10.590: INFO: Pod kube-proxy-7x47x requesting resource cpu=0m on Node hunter-worker2
Aug 23 10:21:10.590: INFO: Pod kube-proxy-xm64c requesting resource cpu=0m on Node hunter-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5d5bc21b-e52a-11ea-87d5-0242ac11000a.162dde7ce4195ba8], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-ft5v9/filler-pod-5d5bc21b-e52a-11ea-87d5-0242ac11000a to hunter-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5d5bc21b-e52a-11ea-87d5-0242ac11000a.162dde7d8bdb3724], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5d5bc21b-e52a-11ea-87d5-0242ac11000a.162dde7e4e00f3e1], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5d5bc21b-e52a-11ea-87d5-0242ac11000a.162dde7e666159c2], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5d5c8ffc-e52a-11ea-87d5-0242ac11000a.162dde7cf03ec1a1], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-ft5v9/filler-pod-5d5c8ffc-e52a-11ea-87d5-0242ac11000a to hunter-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5d5c8ffc-e52a-11ea-87d5-0242ac11000a.162dde7d8e9c173b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5d5c8ffc-e52a-11ea-87d5-0242ac11000a.162dde7e2b92e5d9], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-5d5c8ffc-e52a-11ea-87d5-0242ac11000a.162dde7e3c19df05], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162dde7ecec58144], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:21:20.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-ft5v9" for this suite.
Aug 23 10:21:26.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:21:26.948: INFO: namespace: e2e-tests-sched-pred-ft5v9, resource: bindings, ignored listing per whitelist
Aug 23 10:21:26.990: INFO: namespace e2e-tests-sched-pred-ft5v9 deletion completed in 6.439698311s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:17.172 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:21:26.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 23 10:21:27.524: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-5vmzz,SelfLink:/api/v1/namespaces/e2e-tests-watch-5vmzz/configmaps/e2e-watch-test-label-changed,UID:67607c15-e52a-11ea-a485-0242ac120004,ResourceVersion:1688796,Generation:0,CreationTimestamp:2020-08-23 10:21:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 23 10:21:27.524: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-5vmzz,SelfLink:/api/v1/namespaces/e2e-tests-watch-5vmzz/configmaps/e2e-watch-test-label-changed,UID:67607c15-e52a-11ea-a485-0242ac120004,ResourceVersion:1688797,Generation:0,CreationTimestamp:2020-08-23 10:21:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 23 10:21:27.525: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-5vmzz,SelfLink:/api/v1/namespaces/e2e-tests-watch-5vmzz/configmaps/e2e-watch-test-label-changed,UID:67607c15-e52a-11ea-a485-0242ac120004,ResourceVersion:1688799,Generation:0,CreationTimestamp:2020-08-23 10:21:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 23 10:21:37.581: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-5vmzz,SelfLink:/api/v1/namespaces/e2e-tests-watch-5vmzz/configmaps/e2e-watch-test-label-changed,UID:67607c15-e52a-11ea-a485-0242ac120004,ResourceVersion:1688820,Generation:0,CreationTimestamp:2020-08-23 10:21:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 23 10:21:37.581: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-5vmzz,SelfLink:/api/v1/namespaces/e2e-tests-watch-5vmzz/configmaps/e2e-watch-test-label-changed,UID:67607c15-e52a-11ea-a485-0242ac120004,ResourceVersion:1688821,Generation:0,CreationTimestamp:2020-08-23 10:21:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 23 10:21:37.581: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-5vmzz,SelfLink:/api/v1/namespaces/e2e-tests-watch-5vmzz/configmaps/e2e-watch-test-label-changed,UID:67607c15-e52a-11ea-a485-0242ac120004,ResourceVersion:1688822,Generation:0,CreationTimestamp:2020-08-23 10:21:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:21:37.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-5vmzz" for this suite.
Aug 23 10:21:43.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:21:43.752: INFO: namespace: e2e-tests-watch-5vmzz, resource: bindings, ignored listing per whitelist
Aug 23 10:21:43.814: INFO: namespace e2e-tests-watch-5vmzz deletion completed in 6.228977956s

• [SLOW TEST:16.824 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
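The watch sequence above (ADDED, MODIFIED, then DELETED when the label changes, then ADDED again once it is restored) is driven by a label-selector watch. A minimal sketch of the watched object, with the name and label values taken from the log; everything else is illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed     # name from the log above
  labels:
    # A watch filtered on this label only sees the object while the
    # label matches; changing the value is delivered to the watcher
    # as a DELETED event even though the ConfigMap still exists.
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"
```

Restoring the original label value makes the object match the selector again, which the watch reports as a fresh ADDED event carrying the current state, exactly as the second ADDED line in the log shows.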
SSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:21:43.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-bsxkj/configmap-test-71457c4b-e52a-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 10:21:44.171: INFO: Waiting up to 5m0s for pod "pod-configmaps-715f062d-e52a-11ea-87d5-0242ac11000a" in namespace "e2e-tests-configmap-bsxkj" to be "success or failure"
Aug 23 10:21:44.369: INFO: Pod "pod-configmaps-715f062d-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 198.780479ms
Aug 23 10:21:46.373: INFO: Pod "pod-configmaps-715f062d-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20253874s
Aug 23 10:21:48.393: INFO: Pod "pod-configmaps-715f062d-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222672016s
Aug 23 10:21:50.397: INFO: Pod "pod-configmaps-715f062d-e52a-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.226872234s
STEP: Saw pod success
Aug 23 10:21:50.397: INFO: Pod "pod-configmaps-715f062d-e52a-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:21:50.401: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-715f062d-e52a-11ea-87d5-0242ac11000a container env-test: 
STEP: delete the pod
Aug 23 10:21:50.425: INFO: Waiting for pod pod-configmaps-715f062d-e52a-11ea-87d5-0242ac11000a to disappear
Aug 23 10:21:50.430: INFO: Pod pod-configmaps-715f062d-e52a-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:21:50.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bsxkj" for this suite.
Aug 23 10:21:58.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:21:58.901: INFO: namespace: e2e-tests-configmap-bsxkj, resource: bindings, ignored listing per whitelist
Aug 23 10:21:58.908: INFO: namespace e2e-tests-configmap-bsxkj deletion completed in 8.474448183s

• [SLOW TEST:15.094 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
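The test above injects a single ConfigMap key into a container environment variable and then reads the container's environment. A hedged sketch of that pattern; the ConfigMap name, key, and variable name here are illustrative, not taken from the test:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA               # illustrative variable name
      valueFrom:
        configMapKeyRef:
          name: configmap-test        # illustrative ConfigMap name
          key: data                   # illustrative key
```

The pod runs to completion ("success or failure" in the log means phase Succeeded), and the test then fetches the container logs to verify the variable's value.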
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:21:58.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-g7sn2/configmap-test-7a88d548-e52a-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 10:21:59.587: INFO: Waiting up to 5m0s for pod "pod-configmaps-7a89a4fd-e52a-11ea-87d5-0242ac11000a" in namespace "e2e-tests-configmap-g7sn2" to be "success or failure"
Aug 23 10:21:59.597: INFO: Pod "pod-configmaps-7a89a4fd-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.63701ms
Aug 23 10:22:01.819: INFO: Pod "pod-configmaps-7a89a4fd-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231488453s
Aug 23 10:22:03.822: INFO: Pod "pod-configmaps-7a89a4fd-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.235131961s
Aug 23 10:22:05.830: INFO: Pod "pod-configmaps-7a89a4fd-e52a-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.243277835s
STEP: Saw pod success
Aug 23 10:22:05.831: INFO: Pod "pod-configmaps-7a89a4fd-e52a-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:22:05.833: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-7a89a4fd-e52a-11ea-87d5-0242ac11000a container env-test: 
STEP: delete the pod
Aug 23 10:22:05.929: INFO: Waiting for pod pod-configmaps-7a89a4fd-e52a-11ea-87d5-0242ac11000a to disappear
Aug 23 10:22:06.130: INFO: Pod pod-configmaps-7a89a4fd-e52a-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:22:06.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-g7sn2" for this suite.
Aug 23 10:22:12.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:22:12.356: INFO: namespace: e2e-tests-configmap-g7sn2, resource: bindings, ignored listing per whitelist
Aug 23 10:22:12.360: INFO: namespace e2e-tests-configmap-g7sn2 deletion completed in 6.226979501s

• [SLOW TEST:13.452 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
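"Consumable via the environment" refers to pulling a whole ConfigMap into the environment rather than one key at a time; `envFrom` is the usual way to express that. A sketch with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-envfrom
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    envFrom:
    - configMapRef:
        name: configmap-test   # every key in this ConfigMap becomes an env var
```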
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:22:12.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Aug 23 10:22:16.523: INFO: Pod pod-hostip-823a0561-e52a-11ea-87d5-0242ac11000a has hostIP: 172.18.0.2
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:22:16.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-k6dgx" for this suite.
Aug 23 10:22:38.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:22:38.546: INFO: namespace: e2e-tests-pods-k6dgx, resource: bindings, ignored listing per whitelist
Aug 23 10:22:38.598: INFO: namespace e2e-tests-pods-k6dgx deletion completed in 22.072549017s

• [SLOW TEST:26.238 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
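The host IP assertion above reads `status.hostIP` from the pod object via the API server. The same value can also be exposed to the container itself through the downward API; a hedged sketch of that pattern:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo $HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # e.g. 172.18.0.2 in the run above
```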
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:22:38.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 23 10:22:38.748: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91e22643-e52a-11ea-87d5-0242ac11000a" in namespace "e2e-tests-downward-api-ghqvp" to be "success or failure"
Aug 23 10:22:38.775: INFO: Pod "downwardapi-volume-91e22643-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.026748ms
Aug 23 10:22:40.778: INFO: Pod "downwardapi-volume-91e22643-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029019274s
Aug 23 10:22:42.781: INFO: Pod "downwardapi-volume-91e22643-e52a-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032586258s
STEP: Saw pod success
Aug 23 10:22:42.781: INFO: Pod "downwardapi-volume-91e22643-e52a-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:22:42.784: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-91e22643-e52a-11ea-87d5-0242ac11000a container client-container: 
STEP: delete the pod
Aug 23 10:22:42.831: INFO: Waiting for pod downwardapi-volume-91e22643-e52a-11ea-87d5-0242ac11000a to disappear
Aug 23 10:22:42.838: INFO: Pod downwardapi-volume-91e22643-e52a-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:22:42.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ghqvp" for this suite.
Aug 23 10:22:48.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:22:48.928: INFO: namespace: e2e-tests-downward-api-ghqvp, resource: bindings, ignored listing per whitelist
Aug 23 10:22:48.933: INFO: namespace e2e-tests-downward-api-ghqvp deletion completed in 6.092582876s

• [SLOW TEST:10.335 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
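"Podname only" means the downward API volume exposes a single item mapping `metadata.name` to a file. A sketch of that volume shape; the file path, volume name, and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname          # file containing the pod's own name
        fieldRef:
          fieldPath: metadata.name
```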
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:22:48.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 23 10:22:49.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-tg6lp'
Aug 23 10:22:49.174: INFO: stderr: ""
Aug 23 10:22:49.175: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Aug 23 10:22:49.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-tg6lp'
Aug 23 10:22:58.143: INFO: stderr: ""
Aug 23 10:22:58.143: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:22:58.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tg6lp" for this suite.
Aug 23 10:23:04.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:23:04.214: INFO: namespace: e2e-tests-kubectl-tg6lp, resource: bindings, ignored listing per whitelist
Aug 23 10:23:04.214: INFO: namespace e2e-tests-kubectl-tg6lp deletion completed in 6.058807561s

• [SLOW TEST:15.281 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
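The `kubectl run --restart=Never --generator=run-pod/v1` invocation in the log creates a bare pod rather than a Deployment. Its approximate YAML equivalent, reconstructed from the flags shown (labels that kubectl adds automatically are omitted):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
spec:
  restartPolicy: Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```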
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:23:04.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 23 10:23:04.329: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a122206b-e52a-11ea-87d5-0242ac11000a" in namespace "e2e-tests-downward-api-4bs7p" to be "success or failure"
Aug 23 10:23:04.341: INFO: Pod "downwardapi-volume-a122206b-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.84143ms
Aug 23 10:23:06.344: INFO: Pod "downwardapi-volume-a122206b-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015077211s
Aug 23 10:23:08.347: INFO: Pod "downwardapi-volume-a122206b-e52a-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 4.018198987s
Aug 23 10:23:10.350: INFO: Pod "downwardapi-volume-a122206b-e52a-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021020914s
STEP: Saw pod success
Aug 23 10:23:10.350: INFO: Pod "downwardapi-volume-a122206b-e52a-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:23:10.352: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a122206b-e52a-11ea-87d5-0242ac11000a container client-container: 
STEP: delete the pod
Aug 23 10:23:10.397: INFO: Waiting for pod downwardapi-volume-a122206b-e52a-11ea-87d5-0242ac11000a to disappear
Aug 23 10:23:10.477: INFO: Pod downwardapi-volume-a122206b-e52a-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:23:10.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4bs7p" for this suite.
Aug 23 10:23:16.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:23:16.612: INFO: namespace: e2e-tests-downward-api-4bs7p, resource: bindings, ignored listing per whitelist
Aug 23 10:23:16.620: INFO: namespace e2e-tests-downward-api-4bs7p deletion completed in 6.140251594s

• [SLOW TEST:12.406 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
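When a container declares no memory limit, the downward API reports the node's allocatable memory in its place, which is what this test asserts. A hedged sketch of the pod involved; all names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_limit"]
    # no resources.limits.memory set, so node allocatable is reported
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi     # value is written in mebibytes
```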
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:23:16.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-5qg5
STEP: Creating a pod to test atomic-volume-subpath
Aug 23 10:23:16.796: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-5qg5" in namespace "e2e-tests-subpath-9scv6" to be "success or failure"
Aug 23 10:23:16.855: INFO: Pod "pod-subpath-test-downwardapi-5qg5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.344911ms
Aug 23 10:23:18.891: INFO: Pod "pod-subpath-test-downwardapi-5qg5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094251571s
Aug 23 10:23:20.908: INFO: Pod "pod-subpath-test-downwardapi-5qg5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111868608s
Aug 23 10:23:22.951: INFO: Pod "pod-subpath-test-downwardapi-5qg5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154331862s
Aug 23 10:23:24.954: INFO: Pod "pod-subpath-test-downwardapi-5qg5": Phase="Running", Reason="", readiness=false. Elapsed: 8.157412866s
Aug 23 10:23:26.957: INFO: Pod "pod-subpath-test-downwardapi-5qg5": Phase="Running", Reason="", readiness=false. Elapsed: 10.161016447s
Aug 23 10:23:28.961: INFO: Pod "pod-subpath-test-downwardapi-5qg5": Phase="Running", Reason="", readiness=false. Elapsed: 12.164355157s
Aug 23 10:23:30.964: INFO: Pod "pod-subpath-test-downwardapi-5qg5": Phase="Running", Reason="", readiness=false. Elapsed: 14.167686887s
Aug 23 10:23:32.968: INFO: Pod "pod-subpath-test-downwardapi-5qg5": Phase="Running", Reason="", readiness=false. Elapsed: 16.17122624s
Aug 23 10:23:35.274: INFO: Pod "pod-subpath-test-downwardapi-5qg5": Phase="Running", Reason="", readiness=false. Elapsed: 18.477443618s
Aug 23 10:23:37.278: INFO: Pod "pod-subpath-test-downwardapi-5qg5": Phase="Running", Reason="", readiness=false. Elapsed: 20.481357293s
Aug 23 10:23:39.281: INFO: Pod "pod-subpath-test-downwardapi-5qg5": Phase="Running", Reason="", readiness=false. Elapsed: 22.484735837s
Aug 23 10:23:41.285: INFO: Pod "pod-subpath-test-downwardapi-5qg5": Phase="Running", Reason="", readiness=false. Elapsed: 24.48823683s
Aug 23 10:23:43.289: INFO: Pod "pod-subpath-test-downwardapi-5qg5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.492210635s
STEP: Saw pod success
Aug 23 10:23:43.289: INFO: Pod "pod-subpath-test-downwardapi-5qg5" satisfied condition "success or failure"
Aug 23 10:23:43.290: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-5qg5 container test-container-subpath-downwardapi-5qg5: 
STEP: delete the pod
Aug 23 10:23:43.423: INFO: Waiting for pod pod-subpath-test-downwardapi-5qg5 to disappear
Aug 23 10:23:43.444: INFO: Pod pod-subpath-test-downwardapi-5qg5 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-5qg5
Aug 23 10:23:43.444: INFO: Deleting pod "pod-subpath-test-downwardapi-5qg5" in namespace "e2e-tests-subpath-9scv6"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:23:43.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-9scv6" for this suite.
Aug 23 10:23:51.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:23:51.475: INFO: namespace: e2e-tests-subpath-9scv6, resource: bindings, ignored listing per whitelist
Aug 23 10:23:51.528: INFO: namespace e2e-tests-subpath-9scv6 deletion completed in 8.079531898s

• [SLOW TEST:34.908 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
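The subpath test mounts a single file out of a downward API volume using `subPath`, and the container keeps reading it while the kubelet performs its atomic volume updates — hence the long Running phase in the poll loop above. A hedged sketch of the mount shape; names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["cat", "/test/downward"]
    volumeMounts:
    - name: downward
      mountPath: /test/downward
      subPath: podname         # mount just this one item from the volume
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```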
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:23:51.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 23 10:23:52.528: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:23:52.530: INFO: Number of nodes with available pods: 0
Aug 23 10:23:52.530: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:23:54.132: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:23:54.224: INFO: Number of nodes with available pods: 0
Aug 23 10:23:54.224: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:23:54.533: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:23:54.558: INFO: Number of nodes with available pods: 0
Aug 23 10:23:54.558: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:23:55.534: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:23:55.536: INFO: Number of nodes with available pods: 0
Aug 23 10:23:55.537: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:23:56.535: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:23:56.538: INFO: Number of nodes with available pods: 0
Aug 23 10:23:56.538: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:23:57.719: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:23:57.722: INFO: Number of nodes with available pods: 0
Aug 23 10:23:57.722: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:23:58.760: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:23:58.834: INFO: Number of nodes with available pods: 0
Aug 23 10:23:58.834: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:23:59.947: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:23:59.950: INFO: Number of nodes with available pods: 0
Aug 23 10:23:59.950: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:24:00.599: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:24:00.603: INFO: Number of nodes with available pods: 0
Aug 23 10:24:00.603: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:24:01.869: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:24:01.873: INFO: Number of nodes with available pods: 1
Aug 23 10:24:01.873: INFO: Node hunter-worker is running more than one daemon pod
Aug 23 10:24:02.534: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:24:02.536: INFO: Number of nodes with available pods: 2
Aug 23 10:24:02.536: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 23 10:24:02.754: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 23 10:24:02.757: INFO: Number of nodes with available pods: 2
Aug 23 10:24:02.757: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-n5ccr, will wait for the garbage collector to delete the pods
Aug 23 10:24:04.608: INFO: Deleting DaemonSet.extensions daemon-set took: 323.312357ms
Aug 23 10:24:05.508: INFO: Terminating DaemonSet.extensions daemon-set pods took: 900.239914ms
Aug 23 10:24:18.411: INFO: Number of nodes with available pods: 0
Aug 23 10:24:18.411: INFO: Number of running nodes: 0, number of available pods: 0
Aug 23 10:24:18.414: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-n5ccr/daemonsets","resourceVersion":"1689374"},"items":null}

Aug 23 10:24:18.415: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-n5ccr/pods","resourceVersion":"1689374"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:24:18.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-n5ccr" for this suite.
Aug 23 10:24:24.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:24:24.453: INFO: namespace: e2e-tests-daemonsets-n5ccr, resource: bindings, ignored listing per whitelist
Aug 23 10:24:24.511: INFO: namespace e2e-tests-daemonsets-n5ccr deletion completed in 6.084766592s

• [SLOW TEST:32.983 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
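The polling loop in the DaemonSet spec above repeatedly compares "running nodes" against "available pods", skipping the control-plane node whose `node-role.kubernetes.io/master:NoSchedule` taint the DaemonSet does not tolerate. As an editorial aid, here is a minimal sketch of that predicate — this is illustrative Python, not the e2e framework's actual Go code, and the data shapes are assumptions:

```python
# Illustrative sketch (NOT the e2e framework's code) of the check the log
# repeats: count schedulable nodes, skipping nodes whose taints the
# DaemonSet does not tolerate, and count nodes with exactly one ready pod.

def nodes_ready(nodes, ready_pods_per_node, tolerated_taint_keys=()):
    """Return (running_nodes, available_pods) as the log reports them."""
    running_nodes = 0
    available = 0
    for node in nodes:
        # e.g. node-role.kubernetes.io/master:NoSchedule -> skip this node
        if any(t["key"] not in tolerated_taint_keys for t in node["taints"]):
            continue
        running_nodes += 1
        if ready_pods_per_node.get(node["name"], 0) == 1:
            available += 1
    return running_nodes, available
```

With the cluster in this log (one tainted control-plane node, two untainted workers each running one ready daemon pod), the sketch yields `(2, 2)`, matching the final "Number of running nodes: 2, number of available pods: 2" line before the spec passes.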
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:24:24.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 23 10:24:36.811: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rwws4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 10:24:36.811: INFO: >>> kubeConfig: /root/.kube/config
I0823 10:24:36.836331       6 log.go:172] (0xc002554210) (0xc00120b040) Create stream
I0823 10:24:36.836369       6 log.go:172] (0xc002554210) (0xc00120b040) Stream added, broadcasting: 1
I0823 10:24:36.838672       6 log.go:172] (0xc002554210) Reply frame received for 1
I0823 10:24:36.838721       6 log.go:172] (0xc002554210) (0xc0013cbc20) Create stream
I0823 10:24:36.838735       6 log.go:172] (0xc002554210) (0xc0013cbc20) Stream added, broadcasting: 3
I0823 10:24:36.839605       6 log.go:172] (0xc002554210) Reply frame received for 3
I0823 10:24:36.839638       6 log.go:172] (0xc002554210) (0xc001bd65a0) Create stream
I0823 10:24:36.839646       6 log.go:172] (0xc002554210) (0xc001bd65a0) Stream added, broadcasting: 5
I0823 10:24:36.840559       6 log.go:172] (0xc002554210) Reply frame received for 5
I0823 10:24:36.903286       6 log.go:172] (0xc002554210) Data frame received for 5
I0823 10:24:36.903310       6 log.go:172] (0xc001bd65a0) (5) Data frame handling
I0823 10:24:36.903332       6 log.go:172] (0xc002554210) Data frame received for 3
I0823 10:24:36.903352       6 log.go:172] (0xc0013cbc20) (3) Data frame handling
I0823 10:24:36.903369       6 log.go:172] (0xc0013cbc20) (3) Data frame sent
I0823 10:24:36.903378       6 log.go:172] (0xc002554210) Data frame received for 3
I0823 10:24:36.903383       6 log.go:172] (0xc0013cbc20) (3) Data frame handling
I0823 10:24:36.904372       6 log.go:172] (0xc002554210) Data frame received for 1
I0823 10:24:36.904398       6 log.go:172] (0xc00120b040) (1) Data frame handling
I0823 10:24:36.904411       6 log.go:172] (0xc00120b040) (1) Data frame sent
I0823 10:24:36.904418       6 log.go:172] (0xc002554210) (0xc00120b040) Stream removed, broadcasting: 1
I0823 10:24:36.904428       6 log.go:172] (0xc002554210) Go away received
I0823 10:24:36.904510       6 log.go:172] (0xc002554210) (0xc00120b040) Stream removed, broadcasting: 1
I0823 10:24:36.904528       6 log.go:172] (0xc002554210) (0xc0013cbc20) Stream removed, broadcasting: 3
I0823 10:24:36.904545       6 log.go:172] (0xc002554210) (0xc001bd65a0) Stream removed, broadcasting: 5
Aug 23 10:24:36.904: INFO: Exec stderr: ""
Aug 23 10:24:36.904: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rwws4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 10:24:36.904: INFO: >>> kubeConfig: /root/.kube/config
I0823 10:24:36.923847       6 log.go:172] (0xc0025546e0) (0xc00120b2c0) Create stream
I0823 10:24:36.923874       6 log.go:172] (0xc0025546e0) (0xc00120b2c0) Stream added, broadcasting: 1
I0823 10:24:36.925544       6 log.go:172] (0xc0025546e0) Reply frame received for 1
I0823 10:24:36.925579       6 log.go:172] (0xc0025546e0) (0xc0015e1e00) Create stream
I0823 10:24:36.925588       6 log.go:172] (0xc0025546e0) (0xc0015e1e00) Stream added, broadcasting: 3
I0823 10:24:36.926197       6 log.go:172] (0xc0025546e0) Reply frame received for 3
I0823 10:24:36.926223       6 log.go:172] (0xc0025546e0) (0xc001bd6640) Create stream
I0823 10:24:36.926234       6 log.go:172] (0xc0025546e0) (0xc001bd6640) Stream added, broadcasting: 5
I0823 10:24:36.926872       6 log.go:172] (0xc0025546e0) Reply frame received for 5
I0823 10:24:36.985828       6 log.go:172] (0xc0025546e0) Data frame received for 5
I0823 10:24:36.985857       6 log.go:172] (0xc001bd6640) (5) Data frame handling
I0823 10:24:36.985926       6 log.go:172] (0xc0025546e0) Data frame received for 3
I0823 10:24:36.985987       6 log.go:172] (0xc0015e1e00) (3) Data frame handling
I0823 10:24:36.986013       6 log.go:172] (0xc0015e1e00) (3) Data frame sent
I0823 10:24:36.986035       6 log.go:172] (0xc0025546e0) Data frame received for 3
I0823 10:24:36.986053       6 log.go:172] (0xc0015e1e00) (3) Data frame handling
I0823 10:24:36.987334       6 log.go:172] (0xc0025546e0) Data frame received for 1
I0823 10:24:36.987417       6 log.go:172] (0xc00120b2c0) (1) Data frame handling
I0823 10:24:36.987485       6 log.go:172] (0xc00120b2c0) (1) Data frame sent
I0823 10:24:36.987530       6 log.go:172] (0xc0025546e0) (0xc00120b2c0) Stream removed, broadcasting: 1
I0823 10:24:36.987556       6 log.go:172] (0xc0025546e0) Go away received
I0823 10:24:36.987655       6 log.go:172] (0xc0025546e0) (0xc00120b2c0) Stream removed, broadcasting: 1
I0823 10:24:36.987686       6 log.go:172] (0xc0025546e0) (0xc0015e1e00) Stream removed, broadcasting: 3
I0823 10:24:36.987701       6 log.go:172] (0xc0025546e0) (0xc001bd6640) Stream removed, broadcasting: 5
Aug 23 10:24:36.987: INFO: Exec stderr: ""
Aug 23 10:24:36.987: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rwws4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 10:24:36.987: INFO: >>> kubeConfig: /root/.kube/config
I0823 10:24:37.013841       6 log.go:172] (0xc000b126e0) (0xc0013cbea0) Create stream
I0823 10:24:37.013859       6 log.go:172] (0xc000b126e0) (0xc0013cbea0) Stream added, broadcasting: 1
I0823 10:24:37.021981       6 log.go:172] (0xc000b126e0) Reply frame received for 1
I0823 10:24:37.022041       6 log.go:172] (0xc000b126e0) (0xc0015e1ea0) Create stream
I0823 10:24:37.022056       6 log.go:172] (0xc000b126e0) (0xc0015e1ea0) Stream added, broadcasting: 3
I0823 10:24:37.024261       6 log.go:172] (0xc000b126e0) Reply frame received for 3
I0823 10:24:37.024287       6 log.go:172] (0xc000b126e0) (0xc001bd66e0) Create stream
I0823 10:24:37.024296       6 log.go:172] (0xc000b126e0) (0xc001bd66e0) Stream added, broadcasting: 5
I0823 10:24:37.025032       6 log.go:172] (0xc000b126e0) Reply frame received for 5
I0823 10:24:37.084655       6 log.go:172] (0xc000b126e0) Data frame received for 5
I0823 10:24:37.084689       6 log.go:172] (0xc001bd66e0) (5) Data frame handling
I0823 10:24:37.084714       6 log.go:172] (0xc000b126e0) Data frame received for 3
I0823 10:24:37.084844       6 log.go:172] (0xc0015e1ea0) (3) Data frame handling
I0823 10:24:37.084873       6 log.go:172] (0xc0015e1ea0) (3) Data frame sent
I0823 10:24:37.084884       6 log.go:172] (0xc000b126e0) Data frame received for 3
I0823 10:24:37.084891       6 log.go:172] (0xc0015e1ea0) (3) Data frame handling
I0823 10:24:37.086211       6 log.go:172] (0xc000b126e0) Data frame received for 1
I0823 10:24:37.086242       6 log.go:172] (0xc0013cbea0) (1) Data frame handling
I0823 10:24:37.086251       6 log.go:172] (0xc0013cbea0) (1) Data frame sent
I0823 10:24:37.086270       6 log.go:172] (0xc000b126e0) (0xc0013cbea0) Stream removed, broadcasting: 1
I0823 10:24:37.086349       6 log.go:172] (0xc000b126e0) Go away received
I0823 10:24:37.086398       6 log.go:172] (0xc000b126e0) (0xc0013cbea0) Stream removed, broadcasting: 1
I0823 10:24:37.086421       6 log.go:172] (0xc000b126e0) (0xc0015e1ea0) Stream removed, broadcasting: 3
I0823 10:24:37.086433       6 log.go:172] (0xc000b126e0) (0xc001bd66e0) Stream removed, broadcasting: 5
Aug 23 10:24:37.086: INFO: Exec stderr: ""
Aug 23 10:24:37.086: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rwws4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 10:24:37.086: INFO: >>> kubeConfig: /root/.kube/config
I0823 10:24:37.114938       6 log.go:172] (0xc001dd22c0) (0xc001ffa140) Create stream
I0823 10:24:37.114980       6 log.go:172] (0xc001dd22c0) (0xc001ffa140) Stream added, broadcasting: 1
I0823 10:24:37.122064       6 log.go:172] (0xc001dd22c0) Reply frame received for 1
I0823 10:24:37.122135       6 log.go:172] (0xc001dd22c0) (0xc001e581e0) Create stream
I0823 10:24:37.122162       6 log.go:172] (0xc001dd22c0) (0xc001e581e0) Stream added, broadcasting: 3
I0823 10:24:37.127548       6 log.go:172] (0xc001dd22c0) Reply frame received for 3
I0823 10:24:37.127641       6 log.go:172] (0xc001dd22c0) (0xc00187a140) Create stream
I0823 10:24:37.127674       6 log.go:172] (0xc001dd22c0) (0xc00187a140) Stream added, broadcasting: 5
I0823 10:24:37.129136       6 log.go:172] (0xc001dd22c0) Reply frame received for 5
I0823 10:24:37.191961       6 log.go:172] (0xc001dd22c0) Data frame received for 5
I0823 10:24:37.192019       6 log.go:172] (0xc00187a140) (5) Data frame handling
I0823 10:24:37.192072       6 log.go:172] (0xc001dd22c0) Data frame received for 3
I0823 10:24:37.192098       6 log.go:172] (0xc001e581e0) (3) Data frame handling
I0823 10:24:37.192115       6 log.go:172] (0xc001e581e0) (3) Data frame sent
I0823 10:24:37.192132       6 log.go:172] (0xc001dd22c0) Data frame received for 3
I0823 10:24:37.192154       6 log.go:172] (0xc001e581e0) (3) Data frame handling
I0823 10:24:37.193684       6 log.go:172] (0xc001dd22c0) Data frame received for 1
I0823 10:24:37.193729       6 log.go:172] (0xc001ffa140) (1) Data frame handling
I0823 10:24:37.193749       6 log.go:172] (0xc001ffa140) (1) Data frame sent
I0823 10:24:37.193762       6 log.go:172] (0xc001dd22c0) (0xc001ffa140) Stream removed, broadcasting: 1
I0823 10:24:37.193775       6 log.go:172] (0xc001dd22c0) Go away received
I0823 10:24:37.193895       6 log.go:172] (0xc001dd22c0) (0xc001ffa140) Stream removed, broadcasting: 1
I0823 10:24:37.193911       6 log.go:172] (0xc001dd22c0) (0xc001e581e0) Stream removed, broadcasting: 3
I0823 10:24:37.193917       6 log.go:172] (0xc001dd22c0) (0xc00187a140) Stream removed, broadcasting: 5
Aug 23 10:24:37.193: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 23 10:24:37.193: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rwws4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 10:24:37.193: INFO: >>> kubeConfig: /root/.kube/config
I0823 10:24:37.220473       6 log.go:172] (0xc001dd20b0) (0xc00187a3c0) Create stream
I0823 10:24:37.220511       6 log.go:172] (0xc001dd20b0) (0xc00187a3c0) Stream added, broadcasting: 1
I0823 10:24:37.222146       6 log.go:172] (0xc001dd20b0) Reply frame received for 1
I0823 10:24:37.222180       6 log.go:172] (0xc001dd20b0) (0xc001bee000) Create stream
I0823 10:24:37.222191       6 log.go:172] (0xc001dd20b0) (0xc001bee000) Stream added, broadcasting: 3
I0823 10:24:37.222985       6 log.go:172] (0xc001dd20b0) Reply frame received for 3
I0823 10:24:37.223033       6 log.go:172] (0xc001dd20b0) (0xc002110140) Create stream
I0823 10:24:37.223065       6 log.go:172] (0xc001dd20b0) (0xc002110140) Stream added, broadcasting: 5
I0823 10:24:37.223988       6 log.go:172] (0xc001dd20b0) Reply frame received for 5
I0823 10:24:37.292707       6 log.go:172] (0xc001dd20b0) Data frame received for 5
I0823 10:24:37.292831       6 log.go:172] (0xc002110140) (5) Data frame handling
I0823 10:24:37.292881       6 log.go:172] (0xc001dd20b0) Data frame received for 3
I0823 10:24:37.292903       6 log.go:172] (0xc001bee000) (3) Data frame handling
I0823 10:24:37.292922       6 log.go:172] (0xc001bee000) (3) Data frame sent
I0823 10:24:37.292935       6 log.go:172] (0xc001dd20b0) Data frame received for 3
I0823 10:24:37.292947       6 log.go:172] (0xc001bee000) (3) Data frame handling
I0823 10:24:37.293996       6 log.go:172] (0xc001dd20b0) Data frame received for 1
I0823 10:24:37.294019       6 log.go:172] (0xc00187a3c0) (1) Data frame handling
I0823 10:24:37.294029       6 log.go:172] (0xc00187a3c0) (1) Data frame sent
I0823 10:24:37.294042       6 log.go:172] (0xc001dd20b0) (0xc00187a3c0) Stream removed, broadcasting: 1
I0823 10:24:37.294056       6 log.go:172] (0xc001dd20b0) Go away received
I0823 10:24:37.294212       6 log.go:172] (0xc001dd20b0) (0xc00187a3c0) Stream removed, broadcasting: 1
I0823 10:24:37.294241       6 log.go:172] (0xc001dd20b0) (0xc001bee000) Stream removed, broadcasting: 3
I0823 10:24:37.294256       6 log.go:172] (0xc001dd20b0) (0xc002110140) Stream removed, broadcasting: 5
Aug 23 10:24:37.294: INFO: Exec stderr: ""
Aug 23 10:24:37.294: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rwws4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 10:24:37.294: INFO: >>> kubeConfig: /root/.kube/config
I0823 10:24:37.450509       6 log.go:172] (0xc001dd2630) (0xc00187a6e0) Create stream
I0823 10:24:37.450545       6 log.go:172] (0xc001dd2630) (0xc00187a6e0) Stream added, broadcasting: 1
I0823 10:24:37.452187       6 log.go:172] (0xc001dd2630) Reply frame received for 1
I0823 10:24:37.452212       6 log.go:172] (0xc001dd2630) (0xc0018b0000) Create stream
I0823 10:24:37.452228       6 log.go:172] (0xc001dd2630) (0xc0018b0000) Stream added, broadcasting: 3
I0823 10:24:37.452930       6 log.go:172] (0xc001dd2630) Reply frame received for 3
I0823 10:24:37.452951       6 log.go:172] (0xc001dd2630) (0xc0021103c0) Create stream
I0823 10:24:37.452960       6 log.go:172] (0xc001dd2630) (0xc0021103c0) Stream added, broadcasting: 5
I0823 10:24:37.453554       6 log.go:172] (0xc001dd2630) Reply frame received for 5
I0823 10:24:37.501401       6 log.go:172] (0xc001dd2630) Data frame received for 3
I0823 10:24:37.501428       6 log.go:172] (0xc0018b0000) (3) Data frame handling
I0823 10:24:37.501452       6 log.go:172] (0xc0018b0000) (3) Data frame sent
I0823 10:24:37.501471       6 log.go:172] (0xc001dd2630) Data frame received for 3
I0823 10:24:37.501481       6 log.go:172] (0xc0018b0000) (3) Data frame handling
I0823 10:24:37.501501       6 log.go:172] (0xc001dd2630) Data frame received for 5
I0823 10:24:37.501516       6 log.go:172] (0xc0021103c0) (5) Data frame handling
I0823 10:24:37.502889       6 log.go:172] (0xc001dd2630) Data frame received for 1
I0823 10:24:37.502911       6 log.go:172] (0xc00187a6e0) (1) Data frame handling
I0823 10:24:37.502934       6 log.go:172] (0xc00187a6e0) (1) Data frame sent
I0823 10:24:37.502950       6 log.go:172] (0xc001dd2630) (0xc00187a6e0) Stream removed, broadcasting: 1
I0823 10:24:37.503043       6 log.go:172] (0xc001dd2630) (0xc00187a6e0) Stream removed, broadcasting: 1
I0823 10:24:37.503061       6 log.go:172] (0xc001dd2630) (0xc0018b0000) Stream removed, broadcasting: 3
I0823 10:24:37.503075       6 log.go:172] (0xc001dd2630) (0xc0021103c0) Stream removed, broadcasting: 5
Aug 23 10:24:37.503: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
I0823 10:24:37.503130       6 log.go:172] (0xc001dd2630) Go away received
Aug 23 10:24:37.503: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rwws4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 10:24:37.503: INFO: >>> kubeConfig: /root/.kube/config
I0823 10:24:37.533093       6 log.go:172] (0xc001dd2b00) (0xc00187a960) Create stream
I0823 10:24:37.533129       6 log.go:172] (0xc001dd2b00) (0xc00187a960) Stream added, broadcasting: 1
I0823 10:24:37.534655       6 log.go:172] (0xc001dd2b00) Reply frame received for 1
I0823 10:24:37.534683       6 log.go:172] (0xc001dd2b00) (0xc00187aa00) Create stream
I0823 10:24:37.534691       6 log.go:172] (0xc001dd2b00) (0xc00187aa00) Stream added, broadcasting: 3
I0823 10:24:37.535293       6 log.go:172] (0xc001dd2b00) Reply frame received for 3
I0823 10:24:37.535312       6 log.go:172] (0xc001dd2b00) (0xc00187aaa0) Create stream
I0823 10:24:37.535318       6 log.go:172] (0xc001dd2b00) (0xc00187aaa0) Stream added, broadcasting: 5
I0823 10:24:37.536041       6 log.go:172] (0xc001dd2b00) Reply frame received for 5
I0823 10:24:37.597170       6 log.go:172] (0xc001dd2b00) Data frame received for 5
I0823 10:24:37.597216       6 log.go:172] (0xc00187aaa0) (5) Data frame handling
I0823 10:24:37.597252       6 log.go:172] (0xc001dd2b00) Data frame received for 3
I0823 10:24:37.597276       6 log.go:172] (0xc00187aa00) (3) Data frame handling
I0823 10:24:37.597299       6 log.go:172] (0xc00187aa00) (3) Data frame sent
I0823 10:24:37.597312       6 log.go:172] (0xc001dd2b00) Data frame received for 3
I0823 10:24:37.597322       6 log.go:172] (0xc00187aa00) (3) Data frame handling
I0823 10:24:37.599288       6 log.go:172] (0xc001dd2b00) Data frame received for 1
I0823 10:24:37.599314       6 log.go:172] (0xc00187a960) (1) Data frame handling
I0823 10:24:37.599332       6 log.go:172] (0xc00187a960) (1) Data frame sent
I0823 10:24:37.599355       6 log.go:172] (0xc001dd2b00) (0xc00187a960) Stream removed, broadcasting: 1
I0823 10:24:37.599376       6 log.go:172] (0xc001dd2b00) Go away received
I0823 10:24:37.599507       6 log.go:172] (0xc001dd2b00) (0xc00187a960) Stream removed, broadcasting: 1
I0823 10:24:37.599534       6 log.go:172] (0xc001dd2b00) (0xc00187aa00) Stream removed, broadcasting: 3
I0823 10:24:37.599543       6 log.go:172] (0xc001dd2b00) (0xc00187aaa0) Stream removed, broadcasting: 5
Aug 23 10:24:37.599: INFO: Exec stderr: ""
Aug 23 10:24:37.599: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rwws4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 10:24:37.599: INFO: >>> kubeConfig: /root/.kube/config
I0823 10:24:37.627689       6 log.go:172] (0xc002aac4d0) (0xc001bee460) Create stream
I0823 10:24:37.627735       6 log.go:172] (0xc002aac4d0) (0xc001bee460) Stream added, broadcasting: 1
I0823 10:24:37.630548       6 log.go:172] (0xc002aac4d0) Reply frame received for 1
I0823 10:24:37.630598       6 log.go:172] (0xc002aac4d0) (0xc001bee500) Create stream
I0823 10:24:37.630616       6 log.go:172] (0xc002aac4d0) (0xc001bee500) Stream added, broadcasting: 3
I0823 10:24:37.631632       6 log.go:172] (0xc002aac4d0) Reply frame received for 3
I0823 10:24:37.631658       6 log.go:172] (0xc002aac4d0) (0xc002110460) Create stream
I0823 10:24:37.631667       6 log.go:172] (0xc002aac4d0) (0xc002110460) Stream added, broadcasting: 5
I0823 10:24:37.632825       6 log.go:172] (0xc002aac4d0) Reply frame received for 5
I0823 10:24:37.702991       6 log.go:172] (0xc002aac4d0) Data frame received for 3
I0823 10:24:37.703091       6 log.go:172] (0xc001bee500) (3) Data frame handling
I0823 10:24:37.703133       6 log.go:172] (0xc001bee500) (3) Data frame sent
I0823 10:24:37.703147       6 log.go:172] (0xc002aac4d0) Data frame received for 3
I0823 10:24:37.703165       6 log.go:172] (0xc001bee500) (3) Data frame handling
I0823 10:24:37.703237       6 log.go:172] (0xc002aac4d0) Data frame received for 5
I0823 10:24:37.703259       6 log.go:172] (0xc002110460) (5) Data frame handling
I0823 10:24:37.704579       6 log.go:172] (0xc002aac4d0) Data frame received for 1
I0823 10:24:37.704606       6 log.go:172] (0xc001bee460) (1) Data frame handling
I0823 10:24:37.704627       6 log.go:172] (0xc001bee460) (1) Data frame sent
I0823 10:24:37.704649       6 log.go:172] (0xc002aac4d0) (0xc001bee460) Stream removed, broadcasting: 1
I0823 10:24:37.704675       6 log.go:172] (0xc002aac4d0) Go away received
I0823 10:24:37.704909       6 log.go:172] (0xc002aac4d0) (0xc001bee460) Stream removed, broadcasting: 1
I0823 10:24:37.704941       6 log.go:172] (0xc002aac4d0) (0xc001bee500) Stream removed, broadcasting: 3
I0823 10:24:37.704963       6 log.go:172] (0xc002aac4d0) (0xc002110460) Stream removed, broadcasting: 5
Aug 23 10:24:37.704: INFO: Exec stderr: ""
Aug 23 10:24:37.705: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rwws4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 10:24:37.705: INFO: >>> kubeConfig: /root/.kube/config
I0823 10:24:37.976872       6 log.go:172] (0xc000aa3130) (0xc0018b0280) Create stream
I0823 10:24:37.976904       6 log.go:172] (0xc000aa3130) (0xc0018b0280) Stream added, broadcasting: 1
I0823 10:24:37.978970       6 log.go:172] (0xc000aa3130) Reply frame received for 1
I0823 10:24:37.979001       6 log.go:172] (0xc000aa3130) (0xc001bee5a0) Create stream
I0823 10:24:37.979012       6 log.go:172] (0xc000aa3130) (0xc001bee5a0) Stream added, broadcasting: 3
I0823 10:24:37.980152       6 log.go:172] (0xc000aa3130) Reply frame received for 3
I0823 10:24:37.980192       6 log.go:172] (0xc000aa3130) (0xc001bee640) Create stream
I0823 10:24:37.980206       6 log.go:172] (0xc000aa3130) (0xc001bee640) Stream added, broadcasting: 5
I0823 10:24:37.981293       6 log.go:172] (0xc000aa3130) Reply frame received for 5
I0823 10:24:38.041006       6 log.go:172] (0xc000aa3130) Data frame received for 3
I0823 10:24:38.041030       6 log.go:172] (0xc001bee5a0) (3) Data frame handling
I0823 10:24:38.041039       6 log.go:172] (0xc001bee5a0) (3) Data frame sent
I0823 10:24:38.041042       6 log.go:172] (0xc000aa3130) Data frame received for 3
I0823 10:24:38.041046       6 log.go:172] (0xc001bee5a0) (3) Data frame handling
I0823 10:24:38.041064       6 log.go:172] (0xc000aa3130) Data frame received for 5
I0823 10:24:38.041072       6 log.go:172] (0xc001bee640) (5) Data frame handling
I0823 10:24:38.042442       6 log.go:172] (0xc000aa3130) Data frame received for 1
I0823 10:24:38.042455       6 log.go:172] (0xc0018b0280) (1) Data frame handling
I0823 10:24:38.042462       6 log.go:172] (0xc0018b0280) (1) Data frame sent
I0823 10:24:38.042471       6 log.go:172] (0xc000aa3130) (0xc0018b0280) Stream removed, broadcasting: 1
I0823 10:24:38.042538       6 log.go:172] (0xc000aa3130) (0xc0018b0280) Stream removed, broadcasting: 1
I0823 10:24:38.042548       6 log.go:172] (0xc000aa3130) (0xc001bee5a0) Stream removed, broadcasting: 3
I0823 10:24:38.042698       6 log.go:172] (0xc000aa3130) Go away received
I0823 10:24:38.042717       6 log.go:172] (0xc000aa3130) (0xc001bee640) Stream removed, broadcasting: 5
Aug 23 10:24:38.042: INFO: Exec stderr: ""
Aug 23 10:24:38.042: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-rwws4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 23 10:24:38.042: INFO: >>> kubeConfig: /root/.kube/config
I0823 10:24:38.067846       6 log.go:172] (0xc000b126e0) (0xc002110640) Create stream
I0823 10:24:38.067894       6 log.go:172] (0xc000b126e0) (0xc002110640) Stream added, broadcasting: 1
I0823 10:24:38.070134       6 log.go:172] (0xc000b126e0) Reply frame received for 1
I0823 10:24:38.070182       6 log.go:172] (0xc000b126e0) (0xc0021106e0) Create stream
I0823 10:24:38.070196       6 log.go:172] (0xc000b126e0) (0xc0021106e0) Stream added, broadcasting: 3
I0823 10:24:38.071100       6 log.go:172] (0xc000b126e0) Reply frame received for 3
I0823 10:24:38.071134       6 log.go:172] (0xc000b126e0) (0xc001bee6e0) Create stream
I0823 10:24:38.071146       6 log.go:172] (0xc000b126e0) (0xc001bee6e0) Stream added, broadcasting: 5
I0823 10:24:38.071982       6 log.go:172] (0xc000b126e0) Reply frame received for 5
I0823 10:24:38.138426       6 log.go:172] (0xc000b126e0) Data frame received for 5
I0823 10:24:38.138494       6 log.go:172] (0xc001bee6e0) (5) Data frame handling
I0823 10:24:38.138543       6 log.go:172] (0xc000b126e0) Data frame received for 3
I0823 10:24:38.138575       6 log.go:172] (0xc0021106e0) (3) Data frame handling
I0823 10:24:38.138621       6 log.go:172] (0xc0021106e0) (3) Data frame sent
I0823 10:24:38.138643       6 log.go:172] (0xc000b126e0) Data frame received for 3
I0823 10:24:38.138653       6 log.go:172] (0xc0021106e0) (3) Data frame handling
I0823 10:24:38.139916       6 log.go:172] (0xc000b126e0) Data frame received for 1
I0823 10:24:38.139943       6 log.go:172] (0xc002110640) (1) Data frame handling
I0823 10:24:38.139964       6 log.go:172] (0xc002110640) (1) Data frame sent
I0823 10:24:38.140001       6 log.go:172] (0xc000b126e0) (0xc002110640) Stream removed, broadcasting: 1
I0823 10:24:38.140036       6 log.go:172] (0xc000b126e0) Go away received
I0823 10:24:38.140157       6 log.go:172] (0xc000b126e0) (0xc002110640) Stream removed, broadcasting: 1
I0823 10:24:38.140181       6 log.go:172] (0xc000b126e0) (0xc0021106e0) Stream removed, broadcasting: 3
I0823 10:24:38.140196       6 log.go:172] (0xc000b126e0) (0xc001bee6e0) Stream removed, broadcasting: 5
Aug 23 10:24:38.140: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:24:38.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-rwws4" for this suite.
Aug 23 10:25:32.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:25:32.372: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-rwws4, resource: bindings, ignored listing per whitelist
Aug 23 10:25:32.411: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-rwws4 deletion completed in 54.159554123s

• [SLOW TEST:67.900 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
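The KubeletManagedEtcHosts spec above execs `cat /etc/hosts` (and `cat /etc/hosts-original`) inside each container and decides whether the file is kubelet-managed. A minimal sketch of that decision follows — note the header text is an assumption based on the marker line the kubelet writes into hosts files it manages, not code taken from the suite:

```python
# Sketch of the check behind "Verifying /etc/hosts of container is
# kubelet-managed". Assumption: the kubelet prefixes hosts files it
# manages with a marker line beginning "# Kubernetes-managed hosts file".

KUBELET_HEADER = "# Kubernetes-managed hosts file"

def is_kubelet_managed(etc_hosts_content):
    """True if the /etc/hosts content carries the kubelet marker header."""
    lines = etc_hosts_content.splitlines()
    first_line = lines[0] if lines else ""
    return first_line.startswith(KUBELET_HEADER)
```

This explains the three cases the spec exercises: containers in a `hostNetwork=false` pod see the managed file, a container that mounts its own `/etc/hosts` does not, and no container in a `hostNetwork=true` pod does.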
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:25:32.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 23 10:25:32.922: INFO: Waiting up to 5m0s for pod "pod-f9b37f98-e52a-11ea-87d5-0242ac11000a" in namespace "e2e-tests-emptydir-6hrrp" to be "success or failure"
Aug 23 10:25:32.962: INFO: Pod "pod-f9b37f98-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 39.851691ms
Aug 23 10:25:34.966: INFO: Pod "pod-f9b37f98-e52a-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044109491s
Aug 23 10:25:36.970: INFO: Pod "pod-f9b37f98-e52a-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 4.04790031s
Aug 23 10:25:38.974: INFO: Pod "pod-f9b37f98-e52a-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052267259s
STEP: Saw pod success
Aug 23 10:25:38.974: INFO: Pod "pod-f9b37f98-e52a-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:25:38.978: INFO: Trying to get logs from node hunter-worker2 pod pod-f9b37f98-e52a-11ea-87d5-0242ac11000a container test-container: 
STEP: delete the pod
Aug 23 10:25:38.999: INFO: Waiting for pod pod-f9b37f98-e52a-11ea-87d5-0242ac11000a to disappear
Aug 23 10:25:39.003: INFO: Pod pod-f9b37f98-e52a-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:25:39.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6hrrp" for this suite.
Aug 23 10:25:45.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:25:45.033: INFO: namespace: e2e-tests-emptydir-6hrrp, resource: bindings, ignored listing per whitelist
Aug 23 10:25:45.081: INFO: namespace e2e-tests-emptydir-6hrrp deletion completed in 6.074454672s

• [SLOW TEST:12.670 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:25:45.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:25:57.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-l6z8x" for this suite.
Aug 23 10:26:08.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:26:08.682: INFO: namespace: e2e-tests-kubelet-test-l6z8x, resource: bindings, ignored listing per whitelist
Aug 23 10:26:08.697: INFO: namespace e2e-tests-kubelet-test-l6z8x deletion completed in 10.829105684s

• [SLOW TEST:23.616 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:26:08.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-0ff32cd6-e52b-11ea-87d5-0242ac11000a
STEP: Creating configMap with name cm-test-opt-upd-0ff32d32-e52b-11ea-87d5-0242ac11000a
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-0ff32cd6-e52b-11ea-87d5-0242ac11000a
STEP: Updating configmap cm-test-opt-upd-0ff32d32-e52b-11ea-87d5-0242ac11000a
STEP: Creating configMap with name cm-test-opt-create-0ff32d52-e52b-11ea-87d5-0242ac11000a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:27:49.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9lntk" for this suite.
Aug 23 10:28:20.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:28:20.531: INFO: namespace: e2e-tests-projected-9lntk, resource: bindings, ignored listing per whitelist
Aug 23 10:28:20.578: INFO: namespace e2e-tests-projected-9lntk deletion completed in 30.639375005s

• [SLOW TEST:131.880 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:28:20.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-5eedb9a1-e52b-11ea-87d5-0242ac11000a
STEP: Creating configMap with name cm-test-opt-upd-5eedba03-e52b-11ea-87d5-0242ac11000a
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5eedb9a1-e52b-11ea-87d5-0242ac11000a
STEP: Updating configmap cm-test-opt-upd-5eedba03-e52b-11ea-87d5-0242ac11000a
STEP: Creating configMap with name cm-test-opt-create-5eedba28-e52b-11ea-87d5-0242ac11000a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:29:52.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wxtkr" for this suite.
Aug 23 10:30:21.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:30:21.244: INFO: namespace: e2e-tests-configmap-wxtkr, resource: bindings, ignored listing per whitelist
Aug 23 10:30:21.268: INFO: namespace e2e-tests-configmap-wxtkr deletion completed in 28.510453782s

• [SLOW TEST:120.690 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:30:21.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Aug 23 10:30:22.245: INFO: PodSpec: initContainers in spec.initContainers
Aug 23 10:31:25.957: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a62b98a8-e52b-11ea-87d5-0242ac11000a", GenerateName:"", Namespace:"e2e-tests-init-container-tgxgk", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-tgxgk/pods/pod-init-a62b98a8-e52b-11ea-87d5-0242ac11000a", UID:"a686a3dd-e52b-11ea-a485-0242ac120004", ResourceVersion:"1690406", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733775422, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"245094647"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-gm6vr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001fe6300), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gm6vr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gm6vr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-gm6vr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0028182a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002496180), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002818330)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002818350)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002818358), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00281835c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733775424, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733775424, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733775424, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733775422, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.8", PodIP:"10.244.2.131", 
StartTime:(*v1.Time)(0xc00145c100), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00145c140), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0024fe150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://56e1f57a770786b7c422347b0f017684b1c292e8046b51c220cec6401ab222dc"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00145c160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00145c120), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:31:25.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-tgxgk" for this suite.
Aug 23 10:31:51.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:31:51.840: INFO: namespace: e2e-tests-init-container-tgxgk, resource: bindings, ignored listing per whitelist
Aug 23 10:31:51.867: INFO: namespace e2e-tests-init-container-tgxgk deletion completed in 25.534131283s

• [SLOW TEST:90.599 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:31:51.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 23 10:31:59.373: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:32:00.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-tfb82" for this suite.
Aug 23 10:32:24.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:32:24.513: INFO: namespace: e2e-tests-replicaset-tfb82, resource: bindings, ignored listing per whitelist
Aug 23 10:32:24.564: INFO: namespace e2e-tests-replicaset-tfb82 deletion completed in 24.167818665s

• [SLOW TEST:32.696 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:32:24.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-ef42a8aa-e52b-11ea-87d5-0242ac11000a
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-ef42a8aa-e52b-11ea-87d5-0242ac11000a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:34:00.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-mwnmm" for this suite.
Aug 23 10:34:28.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:34:28.366: INFO: namespace: e2e-tests-projected-mwnmm, resource: bindings, ignored listing per whitelist
Aug 23 10:34:28.373: INFO: namespace e2e-tests-projected-mwnmm deletion completed in 28.343191484s

• [SLOW TEST:123.809 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:34:28.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 10:34:34.674: INFO: Waiting up to 5m0s for pod "client-envvars-3c9faea9-e52c-11ea-87d5-0242ac11000a" in namespace "e2e-tests-pods-52f69" to be "success or failure"
Aug 23 10:34:34.716: INFO: Pod "client-envvars-3c9faea9-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 41.552043ms
Aug 23 10:34:36.719: INFO: Pod "client-envvars-3c9faea9-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04497554s
Aug 23 10:34:38.723: INFO: Pod "client-envvars-3c9faea9-e52c-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048316184s
STEP: Saw pod success
Aug 23 10:34:38.723: INFO: Pod "client-envvars-3c9faea9-e52c-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:34:38.725: INFO: Trying to get logs from node hunter-worker pod client-envvars-3c9faea9-e52c-11ea-87d5-0242ac11000a container env3cont: 
STEP: delete the pod
Aug 23 10:34:38.928: INFO: Waiting for pod client-envvars-3c9faea9-e52c-11ea-87d5-0242ac11000a to disappear
Aug 23 10:34:38.956: INFO: Pod client-envvars-3c9faea9-e52c-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:34:38.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-52f69" for this suite.
Aug 23 10:35:35.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:35:37.265: INFO: namespace: e2e-tests-pods-52f69, resource: bindings, ignored listing per whitelist
Aug 23 10:35:37.268: INFO: namespace e2e-tests-pods-52f69 deletion completed in 58.309361733s

• [SLOW TEST:68.895 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:35:37.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 23 10:35:38.576: INFO: Waiting up to 5m0s for pod "pod-62b6febe-e52c-11ea-87d5-0242ac11000a" in namespace "e2e-tests-emptydir-6hp7w" to be "success or failure"
Aug 23 10:35:39.058: INFO: Pod "pod-62b6febe-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 482.090812ms
Aug 23 10:35:41.062: INFO: Pod "pod-62b6febe-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.486290718s
Aug 23 10:35:43.064: INFO: Pod "pod-62b6febe-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.488902896s
Aug 23 10:35:45.068: INFO: Pod "pod-62b6febe-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.492166683s
Aug 23 10:35:47.181: INFO: Pod "pod-62b6febe-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.605296137s
Aug 23 10:35:49.183: INFO: Pod "pod-62b6febe-e52c-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.607760174s
STEP: Saw pod success
Aug 23 10:35:49.183: INFO: Pod "pod-62b6febe-e52c-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:35:49.185: INFO: Trying to get logs from node hunter-worker2 pod pod-62b6febe-e52c-11ea-87d5-0242ac11000a container test-container: 
STEP: delete the pod
Aug 23 10:35:50.334: INFO: Waiting for pod pod-62b6febe-e52c-11ea-87d5-0242ac11000a to disappear
Aug 23 10:35:50.605: INFO: Pod pod-62b6febe-e52c-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:35:50.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-6hp7w" for this suite.
Aug 23 10:35:59.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:35:59.214: INFO: namespace: e2e-tests-emptydir-6hp7w, resource: bindings, ignored listing per whitelist
Aug 23 10:35:59.230: INFO: namespace e2e-tests-emptydir-6hp7w deletion completed in 8.621463881s

• [SLOW TEST:21.961 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:35:59.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-ncqxp in namespace e2e-tests-proxy-gwwdm
I0823 10:35:59.676983       6 runners.go:184] Created replication controller with name: proxy-service-ncqxp, namespace: e2e-tests-proxy-gwwdm, replica count: 1
I0823 10:36:00.727445       6 runners.go:184] proxy-service-ncqxp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0823 10:36:01.727673       6 runners.go:184] proxy-service-ncqxp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0823 10:36:02.727859       6 runners.go:184] proxy-service-ncqxp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0823 10:36:03.728029       6 runners.go:184] proxy-service-ncqxp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0823 10:36:04.728314       6 runners.go:184] proxy-service-ncqxp Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0823 10:36:05.728508       6 runners.go:184] proxy-service-ncqxp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0823 10:36:06.728710       6 runners.go:184] proxy-service-ncqxp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0823 10:36:07.729034       6 runners.go:184] proxy-service-ncqxp Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0823 10:36:08.729210       6 runners.go:184] proxy-service-ncqxp Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 23 10:36:08.839: INFO: setup took 9.416237945s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 23 10:36:08.845: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-gwwdm/pods/http:proxy-service-ncqxp-645mt:160/proxy/: foo (200; 6.542176ms)
Aug 23 10:36:08.846: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-gwwdm/services/proxy-service-ncqxp:portname1/proxy/: foo (200; 6.791076ms)
Aug 23 10:36:08.846: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-gwwdm/pods/proxy-service-ncqxp-645mt:1080/proxy/:
[remainder of the proxy test output truncated in this capture]
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 23 10:36:35.088: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8432ce06-e52c-11ea-87d5-0242ac11000a" in namespace "e2e-tests-downward-api-86s6q" to be "success or failure"
Aug 23 10:36:35.762: INFO: Pod "downwardapi-volume-8432ce06-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 674.585996ms
Aug 23 10:36:37.766: INFO: Pod "downwardapi-volume-8432ce06-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.678593646s
Aug 23 10:36:39.989: INFO: Pod "downwardapi-volume-8432ce06-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.90080778s
Aug 23 10:36:42.193: INFO: Pod "downwardapi-volume-8432ce06-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.105137833s
Aug 23 10:36:44.198: INFO: Pod "downwardapi-volume-8432ce06-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.110013979s
Aug 23 10:36:46.422: INFO: Pod "downwardapi-volume-8432ce06-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.333937996s
Aug 23 10:36:48.426: INFO: Pod "downwardapi-volume-8432ce06-e52c-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 13.338345601s
Aug 23 10:36:50.430: INFO: Pod "downwardapi-volume-8432ce06-e52c-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.342758879s
STEP: Saw pod success
Aug 23 10:36:50.431: INFO: Pod "downwardapi-volume-8432ce06-e52c-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:36:50.434: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-8432ce06-e52c-11ea-87d5-0242ac11000a container client-container: 
STEP: delete the pod
Aug 23 10:36:52.034: INFO: Waiting for pod downwardapi-volume-8432ce06-e52c-11ea-87d5-0242ac11000a to disappear
Aug 23 10:36:52.474: INFO: Pod downwardapi-volume-8432ce06-e52c-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:36:52.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-86s6q" for this suite.
Aug 23 10:37:04.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:37:04.976: INFO: namespace: e2e-tests-downward-api-86s6q, resource: bindings, ignored listing per whitelist
Aug 23 10:37:05.019: INFO: namespace e2e-tests-downward-api-86s6q deletion completed in 12.541234198s

• [SLOW TEST:32.188 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
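The memory-request case above exposes `requests.memory` to the container through a downward API volume and then polls the pod until it reaches the "success or failure" condition seen in the log. A minimal sketch of the kind of pod the framework creates — names, image, and request value are illustrative, not taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the suite generates UUID-suffixed names
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the projected file once, then exit so the pod phase reaches Succeeded
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "memory_request"
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory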
------------------------------
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:37:05.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Aug 23 10:37:05.754: INFO: Waiting up to 5m0s for pod "var-expansion-96ad21eb-e52c-11ea-87d5-0242ac11000a" in namespace "e2e-tests-var-expansion-7x5g6" to be "success or failure"
Aug 23 10:37:05.991: INFO: Pod "var-expansion-96ad21eb-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 236.441735ms
Aug 23 10:37:07.994: INFO: Pod "var-expansion-96ad21eb-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240001855s
Aug 23 10:37:09.998: INFO: Pod "var-expansion-96ad21eb-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243557316s
Aug 23 10:37:12.175: INFO: Pod "var-expansion-96ad21eb-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.420471301s
Aug 23 10:37:14.178: INFO: Pod "var-expansion-96ad21eb-e52c-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 8.424013732s
Aug 23 10:37:16.361: INFO: Pod "var-expansion-96ad21eb-e52c-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.607216807s
STEP: Saw pod success
Aug 23 10:37:16.361: INFO: Pod "var-expansion-96ad21eb-e52c-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:37:16.364: INFO: Trying to get logs from node hunter-worker pod var-expansion-96ad21eb-e52c-11ea-87d5-0242ac11000a container dapi-container: 
STEP: delete the pod
Aug 23 10:37:16.595: INFO: Waiting for pod var-expansion-96ad21eb-e52c-11ea-87d5-0242ac11000a to disappear
Aug 23 10:37:16.766: INFO: Pod var-expansion-96ad21eb-e52c-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:37:16.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-7x5g6" for this suite.
Aug 23 10:37:25.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:37:25.204: INFO: namespace: e2e-tests-var-expansion-7x5g6, resource: bindings, ignored listing per whitelist
Aug 23 10:37:25.207: INFO: namespace e2e-tests-var-expansion-7x5g6 deletion completed in 8.437248693s

• [SLOW TEST:20.188 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
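The Variable Expansion case verifies that `$(VAR)` references in a container's `command`/`args` are substituted from the container's environment by the kubelet before the container starts. An illustrative pod spec for the same behavior (names and values are assumptions, not from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test-value"
    command: ["/bin/echo"]
    # $(MESSAGE) is expanded by the kubelet, not by a shell
    args: ["$(MESSAGE)"]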
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:37:25.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Aug 23 10:37:25.878: INFO: Waiting up to 5m0s for pod "client-containers-a2a951ac-e52c-11ea-87d5-0242ac11000a" in namespace "e2e-tests-containers-jw4xd" to be "success or failure"
Aug 23 10:37:25.901: INFO: Pod "client-containers-a2a951ac-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.433767ms
Aug 23 10:37:28.006: INFO: Pod "client-containers-a2a951ac-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128160912s
Aug 23 10:37:30.402: INFO: Pod "client-containers-a2a951ac-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.523662655s
Aug 23 10:37:32.423: INFO: Pod "client-containers-a2a951ac-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.544993616s
Aug 23 10:37:34.552: INFO: Pod "client-containers-a2a951ac-e52c-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.673675262s
STEP: Saw pod success
Aug 23 10:37:34.552: INFO: Pod "client-containers-a2a951ac-e52c-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:37:34.767: INFO: Trying to get logs from node hunter-worker2 pod client-containers-a2a951ac-e52c-11ea-87d5-0242ac11000a container test-container: 
STEP: delete the pod
Aug 23 10:37:35.153: INFO: Waiting for pod client-containers-a2a951ac-e52c-11ea-87d5-0242ac11000a to disappear
Aug 23 10:37:35.574: INFO: Pod client-containers-a2a951ac-e52c-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:37:35.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-jw4xd" for this suite.
Aug 23 10:37:44.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:37:44.600: INFO: namespace: e2e-tests-containers-jw4xd, resource: bindings, ignored listing per whitelist
Aug 23 10:37:44.637: INFO: namespace e2e-tests-containers-jw4xd deletion completed in 8.659441426s

• [SLOW TEST:19.430 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
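The Docker Containers case checks that `command` and `args` in the pod spec override the image's `ENTRYPOINT` and `CMD` respectively. A sketch of the "override all" pod (image and argument strings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["echo"]              # replaces the image ENTRYPOINT
    args: ["override", "all"]      # replaces the image CMD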
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:37:44.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 23 10:37:44.813: INFO: Waiting up to 5m0s for pod "downwardapi-volume-adf34611-e52c-11ea-87d5-0242ac11000a" in namespace "e2e-tests-downward-api-jbxss" to be "success or failure"
Aug 23 10:37:44.817: INFO: Pod "downwardapi-volume-adf34611-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.756665ms
Aug 23 10:37:47.109: INFO: Pod "downwardapi-volume-adf34611-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295932628s
Aug 23 10:37:49.113: INFO: Pod "downwardapi-volume-adf34611-e52c-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 4.300525324s
Aug 23 10:37:51.150: INFO: Pod "downwardapi-volume-adf34611-e52c-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.337405801s
STEP: Saw pod success
Aug 23 10:37:51.150: INFO: Pod "downwardapi-volume-adf34611-e52c-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:37:51.153: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-adf34611-e52c-11ea-87d5-0242ac11000a container client-container: 
STEP: delete the pod
Aug 23 10:37:51.568: INFO: Waiting for pod downwardapi-volume-adf34611-e52c-11ea-87d5-0242ac11000a to disappear
Aug 23 10:37:51.780: INFO: Pod downwardapi-volume-adf34611-e52c-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:37:51.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jbxss" for this suite.
Aug 23 10:37:57.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:37:57.870: INFO: namespace: e2e-tests-downward-api-jbxss, resource: bindings, ignored listing per whitelist
Aug 23 10:37:57.921: INFO: namespace e2e-tests-downward-api-jbxss deletion completed in 6.135614955s

• [SLOW TEST:13.284 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
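`DefaultMode` sets the permission bits on every file projected into the volume unless an individual item overrides them. An illustrative downward API volume with a restrictive default (names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # r-------- on every projected file
      items:
      - path: "podname"
        fieldRef:
          fieldPath: metadata.name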
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:37:57.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-b5e2b3bb-e52c-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume secrets
Aug 23 10:37:58.157: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b5e36c25-e52c-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-bxclc" to be "success or failure"
Aug 23 10:37:58.299: INFO: Pod "pod-projected-secrets-b5e36c25-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 141.987833ms
Aug 23 10:38:00.303: INFO: Pod "pod-projected-secrets-b5e36c25-e52c-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146274039s
Aug 23 10:38:02.307: INFO: Pod "pod-projected-secrets-b5e36c25-e52c-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.149849053s
STEP: Saw pod success
Aug 23 10:38:02.307: INFO: Pod "pod-projected-secrets-b5e36c25-e52c-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:38:02.309: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-b5e36c25-e52c-11ea-87d5-0242ac11000a container projected-secret-volume-test: 
STEP: delete the pod
Aug 23 10:38:02.338: INFO: Waiting for pod pod-projected-secrets-b5e36c25-e52c-11ea-87d5-0242ac11000a to disappear
Aug 23 10:38:02.372: INFO: Pod pod-projected-secrets-b5e36c25-e52c-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:38:02.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bxclc" for this suite.
Aug 23 10:38:08.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:38:08.460: INFO: namespace: e2e-tests-projected-bxclc, resource: bindings, ignored listing per whitelist
Aug 23 10:38:08.515: INFO: namespace e2e-tests-projected-bxclc deletion completed in 6.139587039s

• [SLOW TEST:10.595 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
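The projected-secret case remaps a secret key to a new path ("mappings") and sets a per-item file mode. A sketch of the volume being exercised — the secret key and paths are illustrative, only the secret-name pattern comes from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map   # name pattern from the log
          items:
          - key: data-1              # key inside the Secret (illustrative)
            path: new-path-data-1    # file name seen inside the volume
            mode: 0400               # per-item mode overrides any defaultMode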
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:38:08.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 23 10:38:08.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-dw8l4'
Aug 23 10:38:22.411: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 23 10:38:22.411: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Aug 23 10:38:24.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-dw8l4'
Aug 23 10:38:24.870: INFO: stderr: ""
Aug 23 10:38:24.870: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:38:24.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dw8l4" for this suite.
Aug 23 10:38:31.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:38:31.192: INFO: namespace: e2e-tests-kubectl-dw8l4, resource: bindings, ignored listing per whitelist
Aug 23 10:38:31.206: INFO: namespace e2e-tests-kubectl-dw8l4 deletion completed in 6.257810957s

• [SLOW TEST:22.690 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
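The deprecation warning in this test's stderr notes that `kubectl run` with the implicit `deployment/apps.v1` generator was being phased out. The replacements it suggests, sketched as commands (these require a live cluster and are not executed here):

```shell
# Old form (emits the deprecation warning seen above):
kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine

# Replacement when a Deployment is wanted:
kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine

# Replacement when a bare pod is wanted:
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
```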
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:38:31.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-nxqqv
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-nxqqv
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-nxqqv
Aug 23 10:38:32.055: INFO: Found 0 stateful pods, waiting for 1
Aug 23 10:38:42.058: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 23 10:38:42.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 23 10:38:42.331: INFO: stderr: "I0823 10:38:42.195441    2872 log.go:172] (0xc0001386e0) (0xc00070e640) Create stream\nI0823 10:38:42.195488    2872 log.go:172] (0xc0001386e0) (0xc00070e640) Stream added, broadcasting: 1\nI0823 10:38:42.197800    2872 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0823 10:38:42.197831    2872 log.go:172] (0xc0001386e0) (0xc0005aed20) Create stream\nI0823 10:38:42.197839    2872 log.go:172] (0xc0001386e0) (0xc0005aed20) Stream added, broadcasting: 3\nI0823 10:38:42.198529    2872 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0823 10:38:42.198595    2872 log.go:172] (0xc0001386e0) (0xc000270000) Create stream\nI0823 10:38:42.198618    2872 log.go:172] (0xc0001386e0) (0xc000270000) Stream added, broadcasting: 5\nI0823 10:38:42.199437    2872 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0823 10:38:42.325567    2872 log.go:172] (0xc0001386e0) Data frame received for 3\nI0823 10:38:42.325590    2872 log.go:172] (0xc0005aed20) (3) Data frame handling\nI0823 10:38:42.325606    2872 log.go:172] (0xc0005aed20) (3) Data frame sent\nI0823 10:38:42.325614    2872 log.go:172] (0xc0001386e0) Data frame received for 3\nI0823 10:38:42.325620    2872 log.go:172] (0xc0005aed20) (3) Data frame handling\nI0823 10:38:42.325703    2872 log.go:172] (0xc0001386e0) Data frame received for 5\nI0823 10:38:42.325729    2872 log.go:172] (0xc000270000) (5) Data frame handling\nI0823 10:38:42.327669    2872 log.go:172] (0xc0001386e0) Data frame received for 1\nI0823 10:38:42.327686    2872 log.go:172] (0xc00070e640) (1) Data frame handling\nI0823 10:38:42.327704    2872 log.go:172] (0xc00070e640) (1) Data frame sent\nI0823 10:38:42.327715    2872 log.go:172] (0xc0001386e0) (0xc00070e640) Stream removed, broadcasting: 1\nI0823 10:38:42.327920    2872 log.go:172] (0xc0001386e0) Go away received\nI0823 10:38:42.327972    2872 log.go:172] (0xc0001386e0) (0xc00070e640) Stream removed, broadcasting: 1\nI0823 10:38:42.327996    2872 log.go:172] (0xc0001386e0) (0xc0005aed20) Stream removed, broadcasting: 3\nI0823 10:38:42.328011    2872 log.go:172] (0xc0001386e0) (0xc000270000) Stream removed, broadcasting: 5\n"
Aug 23 10:38:42.331: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 23 10:38:42.331: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 23 10:38:42.335: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 23 10:38:52.685: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
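The `mv` above moves aside the file served by the pod's readiness probe, which is why ss-0 flips to Ready=false without leaving Running. Burst scaling can proceed past the unready pod because the stateful set is declared with `podManagementPolicy: Parallel`; a sketch of such a spec (labels and probe details are assumptions consistent with the commands in the log):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  podManagementPolicy: Parallel   # create/delete pods in parallel, without waiting for readiness
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        readinessProbe:
          httpGet:
            path: /index.html   # moving this file aside makes the probe fail
            port: 80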
Aug 23 10:38:52.685: INFO: Waiting for statefulset status.replicas updated to 0
Aug 23 10:38:54.834: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 23 10:38:54.834: INFO: ss-0  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  }]
Aug 23 10:38:54.835: INFO: 
Aug 23 10:38:54.835: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 23 10:38:56.734: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.532258268s
Aug 23 10:38:57.752: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.633107355s
Aug 23 10:38:59.320: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.614627657s
Aug 23 10:39:00.642: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.046616392s
Aug 23 10:39:02.213: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.724461615s
Aug 23 10:39:03.596: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.153771868s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-nxqqv
Aug 23 10:39:04.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 23 10:39:06.159: INFO: stderr: "I0823 10:39:06.077470    2894 log.go:172] (0xc0006d2370) (0xc0008775e0) Create stream\nI0823 10:39:06.077566    2894 log.go:172] (0xc0006d2370) (0xc0008775e0) Stream added, broadcasting: 1\nI0823 10:39:06.081153    2894 log.go:172] (0xc0006d2370) Reply frame received for 1\nI0823 10:39:06.081193    2894 log.go:172] (0xc0006d2370) (0xc000326a00) Create stream\nI0823 10:39:06.081209    2894 log.go:172] (0xc0006d2370) (0xc000326a00) Stream added, broadcasting: 3\nI0823 10:39:06.082270    2894 log.go:172] (0xc0006d2370) Reply frame received for 3\nI0823 10:39:06.082356    2894 log.go:172] (0xc0006d2370) (0xc0003e1400) Create stream\nI0823 10:39:06.082378    2894 log.go:172] (0xc0006d2370) (0xc0003e1400) Stream added, broadcasting: 5\nI0823 10:39:06.083780    2894 log.go:172] (0xc0006d2370) Reply frame received for 5\nI0823 10:39:06.150342    2894 log.go:172] (0xc0006d2370) Data frame received for 5\nI0823 10:39:06.150413    2894 log.go:172] (0xc0006d2370) Data frame received for 3\nI0823 10:39:06.150464    2894 log.go:172] (0xc000326a00) (3) Data frame handling\nI0823 10:39:06.150487    2894 log.go:172] (0xc000326a00) (3) Data frame sent\nI0823 10:39:06.150498    2894 log.go:172] (0xc0006d2370) Data frame received for 3\nI0823 10:39:06.150505    2894 log.go:172] (0xc000326a00) (3) Data frame handling\nI0823 10:39:06.150544    2894 log.go:172] (0xc0003e1400) (5) Data frame handling\nI0823 10:39:06.152348    2894 log.go:172] (0xc0006d2370) Data frame received for 1\nI0823 10:39:06.152372    2894 log.go:172] (0xc0008775e0) (1) Data frame handling\nI0823 10:39:06.152382    2894 log.go:172] (0xc0008775e0) (1) Data frame sent\nI0823 10:39:06.152392    2894 log.go:172] (0xc0006d2370) (0xc0008775e0) Stream removed, broadcasting: 1\nI0823 10:39:06.152498    2894 log.go:172] (0xc0006d2370) Go away received\nI0823 10:39:06.152538    2894 log.go:172] (0xc0006d2370) (0xc0008775e0) Stream removed, broadcasting: 1\nI0823 10:39:06.152553    2894 log.go:172] (0xc0006d2370) (0xc000326a00) Stream removed, broadcasting: 3\nI0823 10:39:06.152563    2894 log.go:172] (0xc0006d2370) (0xc0003e1400) Stream removed, broadcasting: 5\n"
Aug 23 10:39:06.159: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 23 10:39:06.159: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 23 10:39:06.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 23 10:39:06.371: INFO: stderr: "I0823 10:39:06.296284    2916 log.go:172] (0xc0005980b0) (0xc00041d4a0) Create stream\nI0823 10:39:06.296339    2916 log.go:172] (0xc0005980b0) (0xc00041d4a0) Stream added, broadcasting: 1\nI0823 10:39:06.298288    2916 log.go:172] (0xc0005980b0) Reply frame received for 1\nI0823 10:39:06.298335    2916 log.go:172] (0xc0005980b0) (0xc0004ae000) Create stream\nI0823 10:39:06.298345    2916 log.go:172] (0xc0005980b0) (0xc0004ae000) Stream added, broadcasting: 3\nI0823 10:39:06.299118    2916 log.go:172] (0xc0005980b0) Reply frame received for 3\nI0823 10:39:06.299162    2916 log.go:172] (0xc0005980b0) (0xc0002f4000) Create stream\nI0823 10:39:06.299175    2916 log.go:172] (0xc0005980b0) (0xc0002f4000) Stream added, broadcasting: 5\nI0823 10:39:06.299923    2916 log.go:172] (0xc0005980b0) Reply frame received for 5\nI0823 10:39:06.362077    2916 log.go:172] (0xc0005980b0) Data frame received for 3\nI0823 10:39:06.362121    2916 log.go:172] (0xc0004ae000) (3) Data frame handling\nI0823 10:39:06.362140    2916 log.go:172] (0xc0004ae000) (3) Data frame sent\nI0823 10:39:06.362150    2916 log.go:172] (0xc0005980b0) Data frame received for 3\nI0823 10:39:06.362159    2916 log.go:172] (0xc0004ae000) (3) Data frame handling\nI0823 10:39:06.362194    2916 log.go:172] (0xc0005980b0) Data frame received for 5\nI0823 10:39:06.362207    2916 log.go:172] (0xc0002f4000) (5) Data frame handling\nI0823 10:39:06.362226    2916 log.go:172] (0xc0002f4000) (5) Data frame sent\nI0823 10:39:06.362240    2916 log.go:172] (0xc0005980b0) Data frame received for 5\nI0823 10:39:06.362253    2916 log.go:172] (0xc0002f4000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0823 10:39:06.364307    2916 log.go:172] (0xc0005980b0) Data frame received for 1\nI0823 10:39:06.364334    2916 log.go:172] (0xc00041d4a0) (1) Data frame handling\nI0823 10:39:06.364346    2916 log.go:172] (0xc00041d4a0) (1) Data frame sent\nI0823 10:39:06.364355    2916 log.go:172] (0xc0005980b0) (0xc00041d4a0) Stream removed, broadcasting: 1\nI0823 10:39:06.364373    2916 log.go:172] (0xc0005980b0) Go away received\nI0823 10:39:06.364547    2916 log.go:172] (0xc0005980b0) (0xc00041d4a0) Stream removed, broadcasting: 1\nI0823 10:39:06.364563    2916 log.go:172] (0xc0005980b0) (0xc0004ae000) Stream removed, broadcasting: 3\nI0823 10:39:06.364571    2916 log.go:172] (0xc0005980b0) (0xc0002f4000) Stream removed, broadcasting: 5\n"
Aug 23 10:39:06.371: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 23 10:39:06.371: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 23 10:39:06.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 23 10:39:06.655: INFO: rc: 1
Aug 23 10:39:06.655: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc000aedce0 exit status 1   true [0xc0007f8128 0xc0007f8140 0xc0007f8158] [0xc0007f8128 0xc0007f8140 0xc0007f8158] [0xc0007f8138 0xc0007f8150] [0x935700 0x935700] 0xc0022f00c0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Aug 23 10:39:16.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 23 10:39:17.223: INFO: stderr: "I0823 10:39:17.155250    2962 log.go:172] (0xc0008562c0) (0xc000736640) Create stream\nI0823 10:39:17.155304    2962 log.go:172] (0xc0008562c0) (0xc000736640) Stream added, broadcasting: 1\nI0823 10:39:17.157496    2962 log.go:172] (0xc0008562c0) Reply frame received for 1\nI0823 10:39:17.157528    2962 log.go:172] (0xc0008562c0) (0xc00066edc0) Create stream\nI0823 10:39:17.157539    2962 log.go:172] (0xc0008562c0) (0xc00066edc0) Stream added, broadcasting: 3\nI0823 10:39:17.158308    2962 log.go:172] (0xc0008562c0) Reply frame received for 3\nI0823 10:39:17.158338    2962 log.go:172] (0xc0008562c0) (0xc0006ae000) Create stream\nI0823 10:39:17.158347    2962 log.go:172] (0xc0008562c0) (0xc0006ae000) Stream added, broadcasting: 5\nI0823 10:39:17.159149    2962 log.go:172] (0xc0008562c0) Reply frame received for 5\nI0823 10:39:17.215083    2962 log.go:172] (0xc0008562c0) Data frame received for 5\nI0823 10:39:17.215123    2962 log.go:172] (0xc0006ae000) (5) Data frame handling\nI0823 10:39:17.215135    2962 log.go:172] (0xc0006ae000) (5) Data frame sent\nI0823 10:39:17.215143    2962 log.go:172] (0xc0008562c0) Data frame received for 5\nI0823 10:39:17.215150    2962 log.go:172] (0xc0006ae000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0823 10:39:17.215169    2962 log.go:172] (0xc0008562c0) Data frame received for 3\nI0823 10:39:17.215181    2962 log.go:172] (0xc00066edc0) (3) Data frame handling\nI0823 10:39:17.215197    2962 log.go:172] (0xc00066edc0) (3) Data frame sent\nI0823 10:39:17.215210    2962 log.go:172] (0xc0008562c0) Data frame received for 3\nI0823 10:39:17.215216    2962 log.go:172] (0xc00066edc0) (3) Data frame handling\nI0823 10:39:17.216462    2962 log.go:172] (0xc0008562c0) Data frame received for 1\nI0823 10:39:17.216486    2962 log.go:172] (0xc000736640) (1) Data frame handling\nI0823 10:39:17.216496    2962 log.go:172] (0xc000736640) (1) Data frame sent\nI0823 
10:39:17.216505    2962 log.go:172] (0xc0008562c0) (0xc000736640) Stream removed, broadcasting: 1\nI0823 10:39:17.216626    2962 log.go:172] (0xc0008562c0) (0xc000736640) Stream removed, broadcasting: 1\nI0823 10:39:17.216644    2962 log.go:172] (0xc0008562c0) (0xc00066edc0) Stream removed, broadcasting: 3\nI0823 10:39:17.216858    2962 log.go:172] (0xc0008562c0) (0xc0006ae000) Stream removed, broadcasting: 5\n"
Aug 23 10:39:17.224: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 23 10:39:17.224: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

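The failure-and-retry cycle above (run `kubectl exec`, get `rc: 1`, wait 10s, rerun until the container is reachable again) can be sketched as a small shell loop. `retry_host_cmd` and `flaky` below are hypothetical stand-ins so the loop can be exercised without a cluster; the real framework waits 10s between attempts.

```shell
# Sketch of the retry behavior seen in the log: rerun a command until it
# succeeds, logging each failure (the "rc: 1" lines above mark failures).
retry_host_cmd() {
  attempts=0
  until "$@"; do
    attempts=$((attempts + 1))
    echo "rc: 1 -- retrying (attempt $attempts)" >&2
    sleep 1        # the e2e framework uses a 10s backoff here
  done
  echo "$attempts" # number of failed attempts before success
}

# Demo command that fails twice, then succeeds on the third call.
n=0
flaky() { n=$((n + 1)); [ "$n" -ge 3 ]; }

retry_host_cmd flaky   # prints "2": two failures, then success
```

In the log the first attempt fails with `container not found ("nginx")` because the container is still restarting; once it is back, the same `mv` succeeds.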
Aug 23 10:39:17.228: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 23 10:39:17.228: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 23 10:39:17.228: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Aug 23 10:39:17.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 23 10:39:17.848: INFO: stderr: "I0823 10:39:17.331220    2983 log.go:172] (0xc00074e2c0) (0xc00060f4a0) Create stream\nI0823 10:39:17.331277    2983 log.go:172] (0xc00074e2c0) (0xc00060f4a0) Stream added, broadcasting: 1\nI0823 10:39:17.333803    2983 log.go:172] (0xc00074e2c0) Reply frame received for 1\nI0823 10:39:17.333863    2983 log.go:172] (0xc00074e2c0) (0xc000342000) Create stream\nI0823 10:39:17.333885    2983 log.go:172] (0xc00074e2c0) (0xc000342000) Stream added, broadcasting: 3\nI0823 10:39:17.334841    2983 log.go:172] (0xc00074e2c0) Reply frame received for 3\nI0823 10:39:17.334916    2983 log.go:172] (0xc00074e2c0) (0xc00060f540) Create stream\nI0823 10:39:17.334989    2983 log.go:172] (0xc00074e2c0) (0xc00060f540) Stream added, broadcasting: 5\nI0823 10:39:17.335918    2983 log.go:172] (0xc00074e2c0) Reply frame received for 5\nI0823 10:39:17.841770    2983 log.go:172] (0xc00074e2c0) Data frame received for 5\nI0823 10:39:17.841820    2983 log.go:172] (0xc00060f540) (5) Data frame handling\nI0823 10:39:17.841854    2983 log.go:172] (0xc00074e2c0) Data frame received for 3\nI0823 10:39:17.841877    2983 log.go:172] (0xc000342000) (3) Data frame handling\nI0823 10:39:17.841895    2983 log.go:172] (0xc000342000) (3) Data frame sent\nI0823 10:39:17.841907    2983 log.go:172] (0xc00074e2c0) Data frame received for 3\nI0823 10:39:17.841918    2983 log.go:172] (0xc000342000) (3) Data frame handling\nI0823 10:39:17.842440    2983 log.go:172] (0xc00074e2c0) Data frame received for 1\nI0823 10:39:17.842476    2983 log.go:172] (0xc00060f4a0) (1) Data frame handling\nI0823 10:39:17.842492    2983 log.go:172] (0xc00060f4a0) (1) Data frame sent\nI0823 10:39:17.842515    2983 log.go:172] (0xc00074e2c0) (0xc00060f4a0) Stream removed, broadcasting: 1\nI0823 10:39:17.842541    2983 log.go:172] (0xc00074e2c0) Go away received\nI0823 10:39:17.842677    2983 log.go:172] (0xc00074e2c0) (0xc00060f4a0) Stream removed, broadcasting: 1\nI0823 10:39:17.842693    2983 
log.go:172] (0xc00074e2c0) (0xc000342000) Stream removed, broadcasting: 3\nI0823 10:39:17.842711    2983 log.go:172] (0xc00074e2c0) (0xc00060f540) Stream removed, broadcasting: 5\n"
Aug 23 10:39:17.848: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 23 10:39:17.848: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 23 10:39:17.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 23 10:39:18.790: INFO: stderr: "I0823 10:39:18.462417    3004 log.go:172] (0xc00015c6e0) (0xc00074a640) Create stream\nI0823 10:39:18.462474    3004 log.go:172] (0xc00015c6e0) (0xc00074a640) Stream added, broadcasting: 1\nI0823 10:39:18.464288    3004 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0823 10:39:18.464316    3004 log.go:172] (0xc00015c6e0) (0xc00066adc0) Create stream\nI0823 10:39:18.464331    3004 log.go:172] (0xc00015c6e0) (0xc00066adc0) Stream added, broadcasting: 3\nI0823 10:39:18.465283    3004 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0823 10:39:18.465309    3004 log.go:172] (0xc00015c6e0) (0xc0002ce000) Create stream\nI0823 10:39:18.465318    3004 log.go:172] (0xc00015c6e0) (0xc0002ce000) Stream added, broadcasting: 5\nI0823 10:39:18.466092    3004 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0823 10:39:18.779238    3004 log.go:172] (0xc00015c6e0) Data frame received for 5\nI0823 10:39:18.779287    3004 log.go:172] (0xc0002ce000) (5) Data frame handling\nI0823 10:39:18.779318    3004 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0823 10:39:18.779333    3004 log.go:172] (0xc00066adc0) (3) Data frame handling\nI0823 10:39:18.779352    3004 log.go:172] (0xc00066adc0) (3) Data frame sent\nI0823 10:39:18.779371    3004 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0823 10:39:18.779395    3004 log.go:172] (0xc00066adc0) (3) Data frame handling\nI0823 10:39:18.784099    3004 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0823 10:39:18.784121    3004 log.go:172] (0xc00074a640) (1) Data frame handling\nI0823 10:39:18.784140    3004 log.go:172] (0xc00074a640) (1) Data frame sent\nI0823 10:39:18.784161    3004 log.go:172] (0xc00015c6e0) (0xc00074a640) Stream removed, broadcasting: 1\nI0823 10:39:18.784174    3004 log.go:172] (0xc00015c6e0) Go away received\nI0823 10:39:18.784413    3004 log.go:172] (0xc00015c6e0) (0xc00074a640) Stream removed, broadcasting: 1\nI0823 10:39:18.784434    3004 
log.go:172] (0xc00015c6e0) (0xc00066adc0) Stream removed, broadcasting: 3\nI0823 10:39:18.784444    3004 log.go:172] (0xc00015c6e0) (0xc0002ce000) Stream removed, broadcasting: 5\n"
Aug 23 10:39:18.790: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 23 10:39:18.790: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 23 10:39:18.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 23 10:39:21.097: INFO: stderr: "I0823 10:39:19.422992    3027 log.go:172] (0xc00087c2c0) (0xc0006515e0) Create stream\nI0823 10:39:19.423064    3027 log.go:172] (0xc00087c2c0) (0xc0006515e0) Stream added, broadcasting: 1\nI0823 10:39:19.428214    3027 log.go:172] (0xc00087c2c0) Reply frame received for 1\nI0823 10:39:19.428267    3027 log.go:172] (0xc00087c2c0) (0xc00037c5a0) Create stream\nI0823 10:39:19.428276    3027 log.go:172] (0xc00087c2c0) (0xc00037c5a0) Stream added, broadcasting: 3\nI0823 10:39:19.429202    3027 log.go:172] (0xc00087c2c0) Reply frame received for 3\nI0823 10:39:19.429237    3027 log.go:172] (0xc00087c2c0) (0xc000651680) Create stream\nI0823 10:39:19.429252    3027 log.go:172] (0xc00087c2c0) (0xc000651680) Stream added, broadcasting: 5\nI0823 10:39:19.430095    3027 log.go:172] (0xc00087c2c0) Reply frame received for 5\nI0823 10:39:21.085233    3027 log.go:172] (0xc00087c2c0) Data frame received for 5\nI0823 10:39:21.085289    3027 log.go:172] (0xc000651680) (5) Data frame handling\nI0823 10:39:21.085324    3027 log.go:172] (0xc00087c2c0) Data frame received for 3\nI0823 10:39:21.085339    3027 log.go:172] (0xc00037c5a0) (3) Data frame handling\nI0823 10:39:21.085365    3027 log.go:172] (0xc00037c5a0) (3) Data frame sent\nI0823 10:39:21.085382    3027 log.go:172] (0xc00087c2c0) Data frame received for 3\nI0823 10:39:21.085396    3027 log.go:172] (0xc00037c5a0) (3) Data frame handling\nI0823 10:39:21.086455    3027 log.go:172] (0xc00087c2c0) Data frame received for 1\nI0823 10:39:21.086477    3027 log.go:172] (0xc0006515e0) (1) Data frame handling\nI0823 10:39:21.086504    3027 log.go:172] (0xc0006515e0) (1) Data frame sent\nI0823 10:39:21.086536    3027 log.go:172] (0xc00087c2c0) (0xc0006515e0) Stream removed, broadcasting: 1\nI0823 10:39:21.086566    3027 log.go:172] (0xc00087c2c0) Go away received\nI0823 10:39:21.086755    3027 log.go:172] (0xc00087c2c0) (0xc0006515e0) Stream removed, broadcasting: 1\nI0823 10:39:21.086779    3027 
log.go:172] (0xc00087c2c0) (0xc00037c5a0) Stream removed, broadcasting: 3\nI0823 10:39:21.086799    3027 log.go:172] (0xc00087c2c0) (0xc000651680) Stream removed, broadcasting: 5\n"
Aug 23 10:39:21.098: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 23 10:39:21.098: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

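The three `mv` commands above are how the test flips each pod to `Ready=false`: nginx's readiness probe is an HTTP GET for `/index.html`, so moving the file out of the webroot makes the probe fail. A minimal local simulation (file-existence check standing in for the HTTP probe; the temp directories are stand-ins for the container's `/usr/share/nginx/html` and `/tmp`):

```shell
# Simulate breaking and restoring a readiness probe by moving index.html,
# as the test does via kubectl exec above. All paths here are local stand-ins.
webroot=$(mktemp -d)
stash=$(mktemp -d)
echo hello > "$webroot/index.html"

probe() { test -f "$webroot/index.html"; }   # stand-in for the HTTP GET probe

probe && echo "Ready=true"
mv "$webroot/index.html" "$stash/"           # same trick as the test's mv
probe || echo "Ready=false"
mv "$stash/index.html" "$webroot/"           # restored later to recover pods
probe && echo "Ready=true"
```

The reverse `mv` (from `/tmp` back into the webroot) is what the later retry loops in this log keep attempting while pods restart or terminate.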
Aug 23 10:39:21.098: INFO: Waiting for statefulset status.replicas updated to 0
Aug 23 10:39:21.327: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 23 10:39:31.907: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 23 10:39:31.907: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 23 10:39:31.907: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 23 10:39:33.489: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 23 10:39:33.489: INFO: ss-0  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  }]
Aug 23 10:39:33.490: INFO: ss-1  hunter-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:55 +0000 UTC  }]
Aug 23 10:39:33.490: INFO: ss-2  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:55 +0000 UTC  }]
Aug 23 10:39:33.490: INFO: 
Aug 23 10:39:33.490: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 23 10:39:35.105: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 23 10:39:35.105: INFO: ss-0  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  }]
Aug 23 10:39:35.105: INFO: ss-1  hunter-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:55 +0000 UTC  }]
Aug 23 10:39:35.105: INFO: ss-2  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:55 +0000 UTC  }]
Aug 23 10:39:35.105: INFO: 
Aug 23 10:39:35.105: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 23 10:39:36.900: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 23 10:39:36.900: INFO: ss-0  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  }]
Aug 23 10:39:36.900: INFO: ss-1  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:55 +0000 UTC  }]
Aug 23 10:39:36.900: INFO: ss-2  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:55 +0000 UTC  }]
Aug 23 10:39:36.900: INFO: 
Aug 23 10:39:36.900: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 23 10:39:38.355: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 23 10:39:38.355: INFO: ss-0  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  }]
Aug 23 10:39:38.356: INFO: ss-1  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:55 +0000 UTC  }]
Aug 23 10:39:38.356: INFO: ss-2  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:55 +0000 UTC  }]
Aug 23 10:39:38.356: INFO: 
Aug 23 10:39:38.356: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 23 10:39:39.787: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 23 10:39:39.788: INFO: ss-0  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  }]
Aug 23 10:39:39.788: INFO: ss-1  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:55 +0000 UTC  }]
Aug 23 10:39:39.788: INFO: ss-2  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:55 +0000 UTC  }]
Aug 23 10:39:39.788: INFO: 
Aug 23 10:39:39.788: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 23 10:39:41.415: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 23 10:39:41.415: INFO: ss-0  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  }]
Aug 23 10:39:41.415: INFO: ss-1  hunter-worker   Pending  0s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:19 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:55 +0000 UTC  }]
Aug 23 10:39:41.415: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:55 +0000 UTC  }]
Aug 23 10:39:41.415: INFO: 
Aug 23 10:39:41.415: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 23 10:39:42.633: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Aug 23 10:39:42.633: INFO: ss-0  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:39:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-23 10:38:32 +0000 UTC  }]
Aug 23 10:39:42.633: INFO: 
Aug 23 10:39:42.633: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace e2e-tests-statefulset-nxqqv
Aug 23 10:39:43.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 23 10:39:43.768: INFO: rc: 1
Aug 23 10:39:43.768: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0013d95c0 exit status 1   true [0xc001554270 0xc001554288 0xc0015542a0] [0xc001554270 0xc001554288 0xc0015542a0] [0xc001554280 0xc001554298] [0x935700 0x935700] 0xc00293bb00 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Aug 23 10:39:53.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 23 10:39:53.864: INFO: rc: 1
Aug 23 10:39:53.864: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000d4e8a0 exit status 1   true [0xc0007f8298 0xc0007f82b0 0xc0007f82c8] [0xc0007f8298 0xc0007f82b0 0xc0007f82c8] [0xc0007f82a8 0xc0007f82c0] [0x935700 0x935700] 0xc001486060 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 23 10:40:03.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 23 10:40:03.958: INFO: rc: 1
Aug 23 10:40:03.959: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000407f20 exit status 1   true [0xc00016f590 0xc00016f5a8 0xc00016f5d8] [0xc00016f590 0xc00016f5a8 0xc00016f5d8] [0xc00016f5a0 0xc00016f5c0] [0x935700 0x935700] 0xc002860f00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 23 10:40:13.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 23 10:40:14.063: INFO: rc: 1
Aug 23 10:40:14.063: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000976bd0 exit status 1   true [0xc000117428 0xc0001174b8 0xc000117550] [0xc000117428 0xc0001174b8 0xc000117550] [0xc000117498 0xc000117520] [0x935700 0x935700] 0xc002224360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 23 10:40:24.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 23 10:40:24.158: INFO: rc: 1
Aug 23 10:40:24.158: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000976d20 exit status 1   true [0xc0001175a8 0xc0001175f8 0xc000117640] [0xc0001175a8 0xc0001175f8 0xc000117640] [0xc0001175d8 0xc000117630] [0x935700 0x935700] 0xc002224660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 23 10:40:34.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 23 10:40:34.255: INFO: rc: 1
Aug 23 10:40:34.255: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a56330 exit status 1   true [0xc0007f8000 0xc0007f8018 0xc0007f8030] [0xc0007f8000 0xc0007f8018 0xc0007f8030] [0xc0007f8010 0xc0007f8028] [0x935700 0x935700] 0xc0026b81e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 23 10:40:44.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 23 10:40:44.660: INFO: rc: 1
Aug 23 10:40:44.660: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000407bf0 exit status 1   true [0xc00016e000 0xc00016e220 0xc00016e260] [0xc00016e000 0xc00016e220 0xc00016e260] [0xc00016e208 0xc00016e240] [0x935700 0x935700] 0xc0017b2480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Aug 23 10:40:54.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 23 10:40:54.833: INFO: rc: 1
Aug 23 10:40:54.833: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001a56510 exit status 1   true [0xc0007f8038 0xc0007f8050 0xc0007f8068] [0xc0007f8038 0xc0007f8050 0xc0007f8068] [0xc0007f8048 0xc0007f8060] [0x935700 0x935700] 0xc0026b8480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

[... identical RunHostCmd retry blocks repeated every 10s from 10:41:04 through 10:44:37, each running the same kubectl exec against pod "ss-0" and failing with rc: 1, stderr: Error from server (NotFound): pods "ss-0" not found ...]

Aug 23 10:44:47.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nxqqv ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 23 10:44:47.984: INFO: rc: 1
Aug 23 10:44:47.984: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Aug 23 10:44:47.984: INFO: Scaling statefulset ss to 0
Aug 23 10:44:47.992: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Aug 23 10:44:47.994: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nxqqv
Aug 23 10:44:47.996: INFO: Scaling statefulset ss to 0
Aug 23 10:44:48.002: INFO: Waiting for statefulset status.replicas updated to 0
Aug 23 10:44:48.005: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:44:48.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-nxqqv" for this suite.
Aug 23 10:44:54.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:44:54.179: INFO: namespace: e2e-tests-statefulset-nxqqv, resource: bindings, ignored listing per whitelist
Aug 23 10:44:54.210: INFO: namespace e2e-tests-statefulset-nxqqv deletion completed in 6.132045598s

• [SLOW TEST:383.003 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:44:54.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-adf6058a-e52d-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 10:44:54.340: INFO: Waiting up to 5m0s for pod "pod-configmaps-adf91cce-e52d-11ea-87d5-0242ac11000a" in namespace "e2e-tests-configmap-z9gsg" to be "success or failure"
Aug 23 10:44:54.342: INFO: Pod "pod-configmaps-adf91cce-e52d-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191938ms
Aug 23 10:44:56.505: INFO: Pod "pod-configmaps-adf91cce-e52d-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165093402s
Aug 23 10:44:58.509: INFO: Pod "pod-configmaps-adf91cce-e52d-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 4.169312937s
Aug 23 10:45:00.513: INFO: Pod "pod-configmaps-adf91cce-e52d-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.173049901s
STEP: Saw pod success
Aug 23 10:45:00.513: INFO: Pod "pod-configmaps-adf91cce-e52d-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:45:00.515: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-adf91cce-e52d-11ea-87d5-0242ac11000a container configmap-volume-test: 
STEP: delete the pod
Aug 23 10:45:00.549: INFO: Waiting for pod pod-configmaps-adf91cce-e52d-11ea-87d5-0242ac11000a to disappear
Aug 23 10:45:00.561: INFO: Pod pod-configmaps-adf91cce-e52d-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:45:00.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-z9gsg" for this suite.
Aug 23 10:45:06.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:45:06.617: INFO: namespace: e2e-tests-configmap-z9gsg, resource: bindings, ignored listing per whitelist
Aug 23 10:45:06.639: INFO: namespace e2e-tests-configmap-z9gsg deletion completed in 6.075240685s

• [SLOW TEST:12.429 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:45:06.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 10:45:07.105: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Aug 23 10:45:07.111: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-g62dc/daemonsets","resourceVersion":"1692448"},"items":null}

Aug 23 10:45:07.113: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-g62dc/pods","resourceVersion":"1692448"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:45:07.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-g62dc" for this suite.
Aug 23 10:45:13.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:45:13.243: INFO: namespace: e2e-tests-daemonsets-g62dc, resource: bindings, ignored listing per whitelist
Aug 23 10:45:13.413: INFO: namespace e2e-tests-daemonsets-g62dc deletion completed in 6.288003755s

S [SKIPPING] [6.774 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Aug 23 10:45:07.105: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:45:13.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 23 10:45:13.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-5h8nq'
Aug 23 10:45:14.028: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 23 10:45:14.028: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Aug 23 10:45:14.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-5h8nq'
Aug 23 10:45:14.407: INFO: stderr: ""
Aug 23 10:45:14.407: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:45:14.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5h8nq" for this suite.
Aug 23 10:45:36.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:45:36.579: INFO: namespace: e2e-tests-kubectl-5h8nq, resource: bindings, ignored listing per whitelist
Aug 23 10:45:36.585: INFO: namespace e2e-tests-kubectl-5h8nq deletion completed in 22.110782107s

• [SLOW TEST:23.171 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:45:36.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Aug 23 10:45:36.856: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7425b41-e52d-11ea-87d5-0242ac11000a" in namespace "e2e-tests-downward-api-x2stb" to be "success or failure"
Aug 23 10:45:36.886: INFO: Pod "downwardapi-volume-c7425b41-e52d-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 30.493539ms
Aug 23 10:45:38.891: INFO: Pod "downwardapi-volume-c7425b41-e52d-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034901889s
Aug 23 10:45:40.894: INFO: Pod "downwardapi-volume-c7425b41-e52d-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037890558s
STEP: Saw pod success
Aug 23 10:45:40.894: INFO: Pod "downwardapi-volume-c7425b41-e52d-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:45:40.896: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-c7425b41-e52d-11ea-87d5-0242ac11000a container client-container: 
STEP: delete the pod
Aug 23 10:45:41.060: INFO: Waiting for pod downwardapi-volume-c7425b41-e52d-11ea-87d5-0242ac11000a to disappear
Aug 23 10:45:41.281: INFO: Pod downwardapi-volume-c7425b41-e52d-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:45:41.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-x2stb" for this suite.
Aug 23 10:45:47.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:45:47.449: INFO: namespace: e2e-tests-downward-api-x2stb, resource: bindings, ignored listing per whitelist
Aug 23 10:45:47.481: INFO: namespace e2e-tests-downward-api-x2stb deletion completed in 6.195779933s

• [SLOW TEST:10.896 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:45:47.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Aug 23 10:45:47.649: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix726279098/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:45:47.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-scnwb" for this suite.
Aug 23 10:45:53.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:45:53.933: INFO: namespace: e2e-tests-kubectl-scnwb, resource: bindings, ignored listing per whitelist
Aug 23 10:45:53.987: INFO: namespace e2e-tests-kubectl-scnwb deletion completed in 6.101695672s

• [SLOW TEST:6.506 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
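The proxy test above starts `kubectl proxy --unix-socket=/path` and then retrieves `/api/` through that socket. The mechanics — HTTP served over a Unix domain socket rather than a TCP port — can be sketched without a cluster. This is an illustrative stand-in (a fixed-response server plus a raw HTTP client), not the kubectl implementation; the socket path and response body are made up:

```python
import os
import socket
import tempfile
import threading

# Stand-in for `kubectl proxy --unix-socket=...`: an HTTP responder
# bound to a Unix domain socket instead of a TCP port.
sock_path = os.path.join(tempfile.mkdtemp(), "kubectl-proxy.sock")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(sock_path)
server.listen(1)

def serve_once():
    conn, _ = server.accept()
    conn.recv(4096)  # consume the request
    body = b'{"kind":"APIVersions"}'  # hypothetical /api/ payload
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: "
                 + str(len(body)).encode() + b"\r\n\r\n" + body)
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Client side: "retrieving proxy /api/ output" means a plain GET,
# just sent down the Unix socket instead of a TCP connection.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(sock_path)
client.sendall(b"GET /api/ HTTP/1.1\r\nHost: localhost\r\n\r\n")
response = client.recv(4096).decode()
client.close()
t.join()
server.close()

print(response.splitlines()[0])  # HTTP/1.1 200 OK
```

Binding to a filesystem socket is what lets the test avoid port allocation entirely; access control reduces to file permissions on the socket path.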
S
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:45:53.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-hp24c
Aug 23 10:45:58.102: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-hp24c
STEP: checking the pod's current state and verifying that restartCount is present
Aug 23 10:45:58.104: INFO: Initial restart count of pod liveness-http is 0
Aug 23 10:46:24.567: INFO: Restart count of pod e2e-tests-container-probe-hp24c/liveness-http is now 1 (26.463057034s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:46:24.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-hp24c" for this suite.
Aug 23 10:46:33.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:46:33.965: INFO: namespace: e2e-tests-container-probe-hp24c, resource: bindings, ignored listing per whitelist
Aug 23 10:46:34.000: INFO: namespace e2e-tests-container-probe-hp24c deletion completed in 8.753511004s

• [SLOW TEST:40.013 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
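The probe test above watches `restartCount` climb from 0 to 1 once the `/healthz` endpoint starts failing: the kubelet restarts a container after it accumulates `failureThreshold` consecutive probe failures (3 by default), and the failure counter resets on any success or restart. A minimal sketch of that counting rule, assuming a simplified model where each probe outcome is just a boolean:

```python
def restarts_from_probe_outcomes(outcomes, failure_threshold=3):
    """Count restarts the kubelet would perform for a stream of liveness
    probe results (True = probe passed, False = failed). A restart fires
    each time `failure_threshold` consecutive failures accumulate; the
    counter resets on success and after each restart. Illustrative model,
    not kubelet source."""
    restarts = 0
    consecutive_failures = 0
    for ok in outcomes:
        if ok:
            consecutive_failures = 0
        else:
            consecutive_failures += 1
            if consecutive_failures == failure_threshold:
                restarts += 1
                consecutive_failures = 0
    return restarts

# Healthy at first, then /healthz fails persistently -> one restart,
# matching restartCount 0 -> 1 observed in the log above.
print(restarts_from_probe_outcomes([True, True, False, False, False]))  # 1
```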
SSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:46:34.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-27jxp
I0823 10:46:34.848592       6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-27jxp, replica count: 1
I0823 10:46:35.898944       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0823 10:46:36.899111       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0823 10:46:37.899279       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0823 10:46:38.899437       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 23 10:46:39.806: INFO: Created: latency-svc-d2sck
Aug 23 10:46:40.020: INFO: Got endpoints: latency-svc-d2sck [1.02053691s]
Aug 23 10:46:40.142: INFO: Created: latency-svc-tg2pf
Aug 23 10:46:40.152: INFO: Got endpoints: latency-svc-tg2pf [131.517342ms]
Aug 23 10:46:40.222: INFO: Created: latency-svc-mltcx
Aug 23 10:46:40.304: INFO: Got endpoints: latency-svc-mltcx [283.417174ms]
Aug 23 10:46:40.320: INFO: Created: latency-svc-lc4kx
Aug 23 10:46:40.342: INFO: Got endpoints: latency-svc-lc4kx [322.420151ms]
Aug 23 10:46:40.384: INFO: Created: latency-svc-4ft9q
Aug 23 10:46:40.447: INFO: Got endpoints: latency-svc-4ft9q [426.836301ms]
Aug 23 10:46:40.485: INFO: Created: latency-svc-2hgn4
Aug 23 10:46:40.524: INFO: Got endpoints: latency-svc-2hgn4 [504.188591ms]
Aug 23 10:46:40.729: INFO: Created: latency-svc-mwcdk
Aug 23 10:46:40.732: INFO: Got endpoints: latency-svc-mwcdk [711.905002ms]
Aug 23 10:46:40.903: INFO: Created: latency-svc-8rnw6
Aug 23 10:46:40.908: INFO: Got endpoints: latency-svc-8rnw6 [888.337636ms]
Aug 23 10:46:40.982: INFO: Created: latency-svc-ftzjs
Aug 23 10:46:41.001: INFO: Got endpoints: latency-svc-ftzjs [980.367124ms]
Aug 23 10:46:41.095: INFO: Created: latency-svc-dq4cm
Aug 23 10:46:41.237: INFO: Got endpoints: latency-svc-dq4cm [1.217098004s]
Aug 23 10:46:41.255: INFO: Created: latency-svc-vnqjw
Aug 23 10:46:41.270: INFO: Got endpoints: latency-svc-vnqjw [1.249555287s]
Aug 23 10:46:41.765: INFO: Created: latency-svc-ppz8d
Aug 23 10:46:41.821: INFO: Got endpoints: latency-svc-ppz8d [1.800495255s]
Aug 23 10:46:41.923: INFO: Created: latency-svc-8gngd
Aug 23 10:46:41.971: INFO: Got endpoints: latency-svc-8gngd [1.950892844s]
Aug 23 10:46:42.097: INFO: Created: latency-svc-2ll5b
Aug 23 10:46:42.128: INFO: Got endpoints: latency-svc-2ll5b [2.10768178s]
Aug 23 10:46:42.238: INFO: Created: latency-svc-7wskh
Aug 23 10:46:42.272: INFO: Got endpoints: latency-svc-7wskh [2.252162892s]
Aug 23 10:46:42.313: INFO: Created: latency-svc-dc627
Aug 23 10:46:42.326: INFO: Got endpoints: latency-svc-dc627 [2.305544922s]
Aug 23 10:46:42.430: INFO: Created: latency-svc-k9pm7
Aug 23 10:46:42.434: INFO: Got endpoints: latency-svc-k9pm7 [2.282239241s]
Aug 23 10:46:42.483: INFO: Created: latency-svc-pxs2c
Aug 23 10:46:42.501: INFO: Got endpoints: latency-svc-pxs2c [2.19726046s]
Aug 23 10:46:42.585: INFO: Created: latency-svc-f2b7x
Aug 23 10:46:42.587: INFO: Got endpoints: latency-svc-f2b7x [2.24465827s]
Aug 23 10:46:42.651: INFO: Created: latency-svc-r499s
Aug 23 10:46:42.669: INFO: Got endpoints: latency-svc-r499s [2.221873477s]
Aug 23 10:46:42.759: INFO: Created: latency-svc-4cnsh
Aug 23 10:46:42.762: INFO: Got endpoints: latency-svc-4cnsh [2.237374063s]
Aug 23 10:46:42.810: INFO: Created: latency-svc-k4z7f
Aug 23 10:46:42.826: INFO: Got endpoints: latency-svc-k4z7f [2.093860911s]
Aug 23 10:46:42.921: INFO: Created: latency-svc-6scwb
Aug 23 10:46:42.924: INFO: Got endpoints: latency-svc-6scwb [2.015063091s]
Aug 23 10:46:42.966: INFO: Created: latency-svc-zfvbz
Aug 23 10:46:43.018: INFO: Got endpoints: latency-svc-zfvbz [2.016909981s]
Aug 23 10:46:43.100: INFO: Created: latency-svc-fk9xx
Aug 23 10:46:43.126: INFO: Got endpoints: latency-svc-fk9xx [1.888093889s]
Aug 23 10:46:43.647: INFO: Created: latency-svc-xshq4
Aug 23 10:46:43.672: INFO: Got endpoints: latency-svc-xshq4 [2.401926594s]
Aug 23 10:46:43.721: INFO: Created: latency-svc-gdpcd
Aug 23 10:46:43.860: INFO: Got endpoints: latency-svc-gdpcd [2.03947702s]
Aug 23 10:46:43.957: INFO: Created: latency-svc-86qn7
Aug 23 10:46:44.028: INFO: Got endpoints: latency-svc-86qn7 [2.056953716s]
Aug 23 10:46:44.110: INFO: Created: latency-svc-tsdvm
Aug 23 10:46:44.178: INFO: Got endpoints: latency-svc-tsdvm [2.049968943s]
Aug 23 10:46:44.254: INFO: Created: latency-svc-x2sxp
Aug 23 10:46:44.340: INFO: Got endpoints: latency-svc-x2sxp [2.067463556s]
Aug 23 10:46:44.356: INFO: Created: latency-svc-wblt7
Aug 23 10:46:44.375: INFO: Got endpoints: latency-svc-wblt7 [2.049038076s]
Aug 23 10:46:44.409: INFO: Created: latency-svc-nqzjb
Aug 23 10:46:44.436: INFO: Got endpoints: latency-svc-nqzjb [2.001767674s]
Aug 23 10:46:44.509: INFO: Created: latency-svc-xjhl8
Aug 23 10:46:44.566: INFO: Got endpoints: latency-svc-xjhl8 [2.064954452s]
Aug 23 10:46:44.633: INFO: Created: latency-svc-gt8pj
Aug 23 10:46:44.670: INFO: Got endpoints: latency-svc-gt8pj [2.083066882s]
Aug 23 10:46:44.707: INFO: Created: latency-svc-r94rr
Aug 23 10:46:44.764: INFO: Got endpoints: latency-svc-r94rr [2.095331815s]
Aug 23 10:46:44.791: INFO: Created: latency-svc-hm2vf
Aug 23 10:46:44.803: INFO: Got endpoints: latency-svc-hm2vf [2.041270758s]
Aug 23 10:46:44.842: INFO: Created: latency-svc-5p4f2
Aug 23 10:46:44.852: INFO: Got endpoints: latency-svc-5p4f2 [2.025941114s]
Aug 23 10:46:44.922: INFO: Created: latency-svc-f569n
Aug 23 10:46:44.924: INFO: Got endpoints: latency-svc-f569n [2.000624868s]
Aug 23 10:46:44.961: INFO: Created: latency-svc-fw6dz
Aug 23 10:46:44.979: INFO: Got endpoints: latency-svc-fw6dz [1.961398306s]
Aug 23 10:46:45.006: INFO: Created: latency-svc-g4kzp
Aug 23 10:46:45.088: INFO: Got endpoints: latency-svc-g4kzp [1.962130143s]
Aug 23 10:46:45.091: INFO: Created: latency-svc-r4pd4
Aug 23 10:46:45.099: INFO: Got endpoints: latency-svc-r4pd4 [1.426998461s]
Aug 23 10:46:45.135: INFO: Created: latency-svc-8st7k
Aug 23 10:46:45.154: INFO: Got endpoints: latency-svc-8st7k [1.293561616s]
Aug 23 10:46:45.188: INFO: Created: latency-svc-k7ghd
Aug 23 10:46:45.250: INFO: Got endpoints: latency-svc-k7ghd [1.221706536s]
Aug 23 10:46:45.264: INFO: Created: latency-svc-v9l5c
Aug 23 10:46:45.282: INFO: Got endpoints: latency-svc-v9l5c [1.103490463s]
Aug 23 10:46:45.306: INFO: Created: latency-svc-44dcb
Aug 23 10:46:45.317: INFO: Got endpoints: latency-svc-44dcb [977.051981ms]
Aug 23 10:46:45.343: INFO: Created: latency-svc-56vbp
Aug 23 10:46:45.399: INFO: Got endpoints: latency-svc-56vbp [1.024276954s]
Aug 23 10:46:45.429: INFO: Created: latency-svc-h74rc
Aug 23 10:46:45.443: INFO: Got endpoints: latency-svc-h74rc [1.00775511s]
Aug 23 10:46:45.480: INFO: Created: latency-svc-bs26j
Aug 23 10:46:45.498: INFO: Got endpoints: latency-svc-bs26j [931.921628ms]
Aug 23 10:46:45.567: INFO: Created: latency-svc-g5l92
Aug 23 10:46:45.570: INFO: Got endpoints: latency-svc-g5l92 [899.545684ms]
Aug 23 10:46:45.607: INFO: Created: latency-svc-shzgq
Aug 23 10:46:45.625: INFO: Got endpoints: latency-svc-shzgq [860.144252ms]
Aug 23 10:46:45.663: INFO: Created: latency-svc-kwkfq
Aug 23 10:46:45.717: INFO: Got endpoints: latency-svc-kwkfq [913.543321ms]
Aug 23 10:46:45.745: INFO: Created: latency-svc-4pgxq
Aug 23 10:46:45.780: INFO: Got endpoints: latency-svc-4pgxq [928.102567ms]
Aug 23 10:46:45.878: INFO: Created: latency-svc-lpnwt
Aug 23 10:46:45.883: INFO: Got endpoints: latency-svc-lpnwt [958.888554ms]
Aug 23 10:46:45.916: INFO: Created: latency-svc-ll9pb
Aug 23 10:46:45.926: INFO: Got endpoints: latency-svc-ll9pb [946.496992ms]
Aug 23 10:46:45.960: INFO: Created: latency-svc-h85g7
Aug 23 10:46:46.022: INFO: Got endpoints: latency-svc-h85g7 [934.596288ms]
Aug 23 10:46:46.032: INFO: Created: latency-svc-h2m7d
Aug 23 10:46:46.053: INFO: Got endpoints: latency-svc-h2m7d [953.794216ms]
Aug 23 10:46:46.178: INFO: Created: latency-svc-2gx2k
Aug 23 10:46:46.181: INFO: Got endpoints: latency-svc-2gx2k [1.027174664s]
Aug 23 10:46:46.237: INFO: Created: latency-svc-7dsqr
Aug 23 10:46:46.365: INFO: Got endpoints: latency-svc-7dsqr [1.115155922s]
Aug 23 10:46:46.416: INFO: Created: latency-svc-2fkck
Aug 23 10:46:46.456: INFO: Got endpoints: latency-svc-2fkck [1.173760205s]
Aug 23 10:46:46.573: INFO: Created: latency-svc-945sp
Aug 23 10:46:46.602: INFO: Got endpoints: latency-svc-945sp [1.285258471s]
Aug 23 10:46:46.651: INFO: Created: latency-svc-dstgp
Aug 23 10:46:46.735: INFO: Got endpoints: latency-svc-dstgp [1.335216438s]
Aug 23 10:46:46.748: INFO: Created: latency-svc-w7pk6
Aug 23 10:46:46.779: INFO: Got endpoints: latency-svc-w7pk6 [1.335519464s]
Aug 23 10:46:46.808: INFO: Created: latency-svc-rqhrk
Aug 23 10:46:46.817: INFO: Got endpoints: latency-svc-rqhrk [1.318670101s]
Aug 23 10:46:46.873: INFO: Created: latency-svc-6pmgq
Aug 23 10:46:46.877: INFO: Got endpoints: latency-svc-6pmgq [1.306906468s]
Aug 23 10:46:46.908: INFO: Created: latency-svc-skx9x
Aug 23 10:46:46.925: INFO: Got endpoints: latency-svc-skx9x [1.300737719s]
Aug 23 10:46:46.950: INFO: Created: latency-svc-v8rds
Aug 23 10:46:46.962: INFO: Got endpoints: latency-svc-v8rds [1.245476935s]
Aug 23 10:46:47.035: INFO: Created: latency-svc-h9zjk
Aug 23 10:46:47.046: INFO: Got endpoints: latency-svc-h9zjk [1.266335543s]
Aug 23 10:46:47.073: INFO: Created: latency-svc-55mzf
Aug 23 10:46:47.089: INFO: Got endpoints: latency-svc-55mzf [1.206194666s]
Aug 23 10:46:47.124: INFO: Created: latency-svc-np687
Aug 23 10:46:47.184: INFO: Got endpoints: latency-svc-np687 [1.25787766s]
Aug 23 10:46:47.223: INFO: Created: latency-svc-2bbtd
Aug 23 10:46:47.240: INFO: Got endpoints: latency-svc-2bbtd [1.217292614s]
Aug 23 10:46:47.271: INFO: Created: latency-svc-7tlhz
Aug 23 10:46:47.358: INFO: Got endpoints: latency-svc-7tlhz [1.304919108s]
Aug 23 10:46:47.359: INFO: Created: latency-svc-gbkw4
Aug 23 10:46:47.372: INFO: Got endpoints: latency-svc-gbkw4 [1.190846041s]
Aug 23 10:46:47.421: INFO: Created: latency-svc-dwvtf
Aug 23 10:46:47.439: INFO: Got endpoints: latency-svc-dwvtf [1.073294478s]
Aug 23 10:46:47.538: INFO: Created: latency-svc-bz8th
Aug 23 10:46:47.541: INFO: Got endpoints: latency-svc-bz8th [1.085801031s]
Aug 23 10:46:47.599: INFO: Created: latency-svc-m777c
Aug 23 10:46:47.614: INFO: Got endpoints: latency-svc-m777c [1.011881106s]
Aug 23 10:46:47.723: INFO: Created: latency-svc-jqpnj
Aug 23 10:46:47.727: INFO: Got endpoints: latency-svc-jqpnj [992.444045ms]
Aug 23 10:46:48.480: INFO: Created: latency-svc-kfhft
Aug 23 10:46:48.628: INFO: Got endpoints: latency-svc-kfhft [1.84899105s]
Aug 23 10:46:48.824: INFO: Created: latency-svc-tdqzz
Aug 23 10:46:48.843: INFO: Got endpoints: latency-svc-tdqzz [2.026021908s]
Aug 23 10:46:48.951: INFO: Created: latency-svc-zwdm8
Aug 23 10:46:48.963: INFO: Got endpoints: latency-svc-zwdm8 [2.086046183s]
Aug 23 10:46:49.009: INFO: Created: latency-svc-8kllt
Aug 23 10:46:49.017: INFO: Got endpoints: latency-svc-8kllt [2.091629824s]
Aug 23 10:46:49.094: INFO: Created: latency-svc-mk4d7
Aug 23 10:46:49.114: INFO: Got endpoints: latency-svc-mk4d7 [2.151581796s]
Aug 23 10:46:49.159: INFO: Created: latency-svc-gjlms
Aug 23 10:46:49.162: INFO: Got endpoints: latency-svc-gjlms [2.115372354s]
Aug 23 10:46:49.266: INFO: Created: latency-svc-2wt52
Aug 23 10:46:49.294: INFO: Got endpoints: latency-svc-2wt52 [2.204951492s]
Aug 23 10:46:49.354: INFO: Created: latency-svc-mp8gl
Aug 23 10:46:49.406: INFO: Got endpoints: latency-svc-mp8gl [2.221888241s]
Aug 23 10:46:49.425: INFO: Created: latency-svc-r4qqm
Aug 23 10:46:49.445: INFO: Got endpoints: latency-svc-r4qqm [2.205526252s]
Aug 23 10:46:49.482: INFO: Created: latency-svc-gccwk
Aug 23 10:46:49.499: INFO: Got endpoints: latency-svc-gccwk [2.141395508s]
Aug 23 10:46:49.593: INFO: Created: latency-svc-vvrkw
Aug 23 10:46:49.621: INFO: Got endpoints: latency-svc-vvrkw [2.248398509s]
Aug 23 10:46:49.723: INFO: Created: latency-svc-dk7pg
Aug 23 10:46:49.726: INFO: Got endpoints: latency-svc-dk7pg [2.286953215s]
Aug 23 10:46:49.806: INFO: Created: latency-svc-kcrp4
Aug 23 10:46:49.920: INFO: Got endpoints: latency-svc-kcrp4 [2.378709914s]
Aug 23 10:46:49.976: INFO: Created: latency-svc-l4lch
Aug 23 10:46:50.010: INFO: Got endpoints: latency-svc-l4lch [2.396016663s]
Aug 23 10:46:50.077: INFO: Created: latency-svc-bskbr
Aug 23 10:46:50.115: INFO: Created: latency-svc-wdwvf
Aug 23 10:46:50.131: INFO: Got endpoints: latency-svc-wdwvf [1.503086321s]
Aug 23 10:46:50.132: INFO: Got endpoints: latency-svc-bskbr [2.40425177s]
Aug 23 10:46:50.245: INFO: Created: latency-svc-rblqr
Aug 23 10:46:50.265: INFO: Got endpoints: latency-svc-rblqr [1.42182474s]
Aug 23 10:46:50.294: INFO: Created: latency-svc-gsxhl
Aug 23 10:46:50.312: INFO: Got endpoints: latency-svc-gsxhl [1.349010543s]
Aug 23 10:46:50.371: INFO: Created: latency-svc-bbrpd
Aug 23 10:46:50.374: INFO: Got endpoints: latency-svc-bbrpd [1.356744453s]
Aug 23 10:46:50.415: INFO: Created: latency-svc-z7p2n
Aug 23 10:46:50.438: INFO: Got endpoints: latency-svc-z7p2n [1.324323116s]
Aug 23 10:46:50.469: INFO: Created: latency-svc-jd4xw
Aug 23 10:46:50.519: INFO: Got endpoints: latency-svc-jd4xw [1.357341306s]
Aug 23 10:46:50.551: INFO: Created: latency-svc-477m6
Aug 23 10:46:50.565: INFO: Got endpoints: latency-svc-477m6 [1.270767515s]
Aug 23 10:46:50.592: INFO: Created: latency-svc-g7pvz
Aug 23 10:46:50.602: INFO: Got endpoints: latency-svc-g7pvz [1.196229054s]
Aug 23 10:46:50.669: INFO: Created: latency-svc-95l6h
Aug 23 10:46:50.686: INFO: Got endpoints: latency-svc-95l6h [1.240692027s]
Aug 23 10:46:50.809: INFO: Created: latency-svc-kqrqp
Aug 23 10:46:50.837: INFO: Got endpoints: latency-svc-kqrqp [1.337196355s]
Aug 23 10:46:50.900: INFO: Created: latency-svc-8hvdm
Aug 23 10:46:50.988: INFO: Got endpoints: latency-svc-8hvdm [1.367173044s]
Aug 23 10:46:50.990: INFO: Created: latency-svc-gzw65
Aug 23 10:46:50.999: INFO: Got endpoints: latency-svc-gzw65 [1.272965247s]
Aug 23 10:46:51.030: INFO: Created: latency-svc-hnj9f
Aug 23 10:46:51.048: INFO: Got endpoints: latency-svc-hnj9f [1.127416957s]
Aug 23 10:46:51.086: INFO: Created: latency-svc-rjwvp
Aug 23 10:46:51.130: INFO: Got endpoints: latency-svc-rjwvp [1.11986432s]
Aug 23 10:46:51.171: INFO: Created: latency-svc-t8h2z
Aug 23 10:46:51.186: INFO: Got endpoints: latency-svc-t8h2z [1.054329027s]
Aug 23 10:46:51.213: INFO: Created: latency-svc-r6qdv
Aug 23 10:46:51.316: INFO: Got endpoints: latency-svc-r6qdv [1.18444035s]
Aug 23 10:46:51.366: INFO: Created: latency-svc-65n9m
Aug 23 10:46:51.472: INFO: Got endpoints: latency-svc-65n9m [1.206842057s]
Aug 23 10:46:51.475: INFO: Created: latency-svc-qc2p5
Aug 23 10:46:51.484: INFO: Got endpoints: latency-svc-qc2p5 [1.172190084s]
Aug 23 10:46:51.504: INFO: Created: latency-svc-5njkv
Aug 23 10:46:51.517: INFO: Got endpoints: latency-svc-5njkv [1.143071172s]
Aug 23 10:46:51.535: INFO: Created: latency-svc-8xzdd
Aug 23 10:46:51.547: INFO: Got endpoints: latency-svc-8xzdd [1.108952981s]
Aug 23 10:46:51.567: INFO: Created: latency-svc-sxc82
Aug 23 10:46:51.616: INFO: Got endpoints: latency-svc-sxc82 [1.097094026s]
Aug 23 10:46:51.633: INFO: Created: latency-svc-qjbzz
Aug 23 10:46:51.656: INFO: Got endpoints: latency-svc-qjbzz [1.090299323s]
Aug 23 10:46:51.690: INFO: Created: latency-svc-4nqvm
Aug 23 10:46:51.710: INFO: Got endpoints: latency-svc-4nqvm [1.108136173s]
Aug 23 10:46:51.856: INFO: Created: latency-svc-csnwr
Aug 23 10:46:51.858: INFO: Got endpoints: latency-svc-csnwr [1.171754883s]
Aug 23 10:46:51.909: INFO: Created: latency-svc-x5fgw
Aug 23 10:46:51.921: INFO: Got endpoints: latency-svc-x5fgw [1.083873099s]
Aug 23 10:46:52.011: INFO: Created: latency-svc-5w22h
Aug 23 10:46:52.022: INFO: Got endpoints: latency-svc-5w22h [1.034500727s]
Aug 23 10:46:52.061: INFO: Created: latency-svc-cqctb
Aug 23 10:46:52.071: INFO: Got endpoints: latency-svc-cqctb [1.072372248s]
Aug 23 10:46:52.166: INFO: Created: latency-svc-pmnxp
Aug 23 10:46:52.170: INFO: Got endpoints: latency-svc-pmnxp [1.12212353s]
Aug 23 10:46:52.239: INFO: Created: latency-svc-nx4f6
Aug 23 10:46:52.251: INFO: Got endpoints: latency-svc-nx4f6 [1.121148445s]
Aug 23 10:46:52.311: INFO: Created: latency-svc-tm2kv
Aug 23 10:46:52.330: INFO: Got endpoints: latency-svc-tm2kv [1.143541079s]
Aug 23 10:46:52.358: INFO: Created: latency-svc-kqd8t
Aug 23 10:46:52.378: INFO: Got endpoints: latency-svc-kqd8t [1.062066426s]
Aug 23 10:46:52.396: INFO: Created: latency-svc-xnrhb
Aug 23 10:46:52.471: INFO: Got endpoints: latency-svc-xnrhb [999.8823ms]
Aug 23 10:46:52.480: INFO: Created: latency-svc-d7876
Aug 23 10:46:52.485: INFO: Got endpoints: latency-svc-d7876 [1.000439569s]
Aug 23 10:46:52.507: INFO: Created: latency-svc-njq4m
Aug 23 10:46:52.521: INFO: Got endpoints: latency-svc-njq4m [1.004135258s]
Aug 23 10:46:52.555: INFO: Created: latency-svc-z28zj
Aug 23 10:46:52.605: INFO: Got endpoints: latency-svc-z28zj [1.057769104s]
Aug 23 10:46:52.654: INFO: Created: latency-svc-fl77q
Aug 23 10:46:52.783: INFO: Got endpoints: latency-svc-fl77q [1.166000387s]
Aug 23 10:46:52.791: INFO: Created: latency-svc-hkg5c
Aug 23 10:46:52.803: INFO: Got endpoints: latency-svc-hkg5c [1.147725244s]
Aug 23 10:46:52.821: INFO: Created: latency-svc-n95cv
Aug 23 10:46:52.834: INFO: Got endpoints: latency-svc-n95cv [1.123734089s]
Aug 23 10:46:52.851: INFO: Created: latency-svc-jh6vn
Aug 23 10:46:52.870: INFO: Got endpoints: latency-svc-jh6vn [1.012321241s]
Aug 23 10:46:52.943: INFO: Created: latency-svc-gkrrr
Aug 23 10:46:52.945: INFO: Got endpoints: latency-svc-gkrrr [1.023928992s]
Aug 23 10:46:52.995: INFO: Created: latency-svc-wfcn8
Aug 23 10:46:53.078: INFO: Got endpoints: latency-svc-wfcn8 [1.055778172s]
Aug 23 10:46:53.103: INFO: Created: latency-svc-bgwxz
Aug 23 10:46:53.117: INFO: Got endpoints: latency-svc-bgwxz [1.045820347s]
Aug 23 10:46:53.146: INFO: Created: latency-svc-92lr7
Aug 23 10:46:53.159: INFO: Got endpoints: latency-svc-92lr7 [989.127162ms]
Aug 23 10:46:53.268: INFO: Created: latency-svc-tv2bc
Aug 23 10:46:53.271: INFO: Got endpoints: latency-svc-tv2bc [1.020076151s]
Aug 23 10:46:53.302: INFO: Created: latency-svc-qpmt4
Aug 23 10:46:53.325: INFO: Got endpoints: latency-svc-qpmt4 [995.577476ms]
Aug 23 10:46:53.350: INFO: Created: latency-svc-mfsrg
Aug 23 10:46:53.358: INFO: Got endpoints: latency-svc-mfsrg [980.287222ms]
Aug 23 10:46:53.418: INFO: Created: latency-svc-pv5dt
Aug 23 10:46:53.420: INFO: Got endpoints: latency-svc-pv5dt [948.556801ms]
Aug 23 10:46:53.456: INFO: Created: latency-svc-4nhfm
Aug 23 10:46:53.478: INFO: Got endpoints: latency-svc-4nhfm [993.665597ms]
Aug 23 10:46:53.507: INFO: Created: latency-svc-tgdf4
Aug 23 10:46:53.579: INFO: Got endpoints: latency-svc-tgdf4 [1.057857504s]
Aug 23 10:46:53.581: INFO: Created: latency-svc-s8zhp
Aug 23 10:46:53.599: INFO: Got endpoints: latency-svc-s8zhp [993.856396ms]
Aug 23 10:46:53.626: INFO: Created: latency-svc-jzw6z
Aug 23 10:46:53.641: INFO: Got endpoints: latency-svc-jzw6z [858.889198ms]
Aug 23 10:46:53.659: INFO: Created: latency-svc-jv6dr
Aug 23 10:46:53.673: INFO: Got endpoints: latency-svc-jv6dr [869.760682ms]
Aug 23 10:46:53.712: INFO: Created: latency-svc-mlwwt
Aug 23 10:46:53.714: INFO: Got endpoints: latency-svc-mlwwt [880.099442ms]
Aug 23 10:46:54.520: INFO: Created: latency-svc-zqztx
Aug 23 10:46:54.524: INFO: Got endpoints: latency-svc-zqztx [1.654073177s]
Aug 23 10:46:54.603: INFO: Created: latency-svc-796mh
Aug 23 10:46:54.663: INFO: Got endpoints: latency-svc-796mh [1.718087591s]
Aug 23 10:46:54.676: INFO: Created: latency-svc-gb9lm
Aug 23 10:46:54.685: INFO: Got endpoints: latency-svc-gb9lm [1.607095424s]
Aug 23 10:46:54.725: INFO: Created: latency-svc-fdm92
Aug 23 10:46:54.885: INFO: Got endpoints: latency-svc-fdm92 [1.767613743s]
Aug 23 10:46:55.116: INFO: Created: latency-svc-686qz
Aug 23 10:46:55.121: INFO: Got endpoints: latency-svc-686qz [1.961615669s]
Aug 23 10:46:55.282: INFO: Created: latency-svc-cw2x4
Aug 23 10:46:55.283: INFO: Got endpoints: latency-svc-cw2x4 [398.607774ms]
Aug 23 10:46:55.361: INFO: Created: latency-svc-rjbzv
Aug 23 10:46:55.507: INFO: Got endpoints: latency-svc-rjbzv [2.23607541s]
Aug 23 10:46:55.542: INFO: Created: latency-svc-shgjs
Aug 23 10:46:55.585: INFO: Got endpoints: latency-svc-shgjs [2.25997096s]
Aug 23 10:46:55.663: INFO: Created: latency-svc-r4hxz
Aug 23 10:46:55.666: INFO: Got endpoints: latency-svc-r4hxz [2.307701539s]
Aug 23 10:46:55.705: INFO: Created: latency-svc-tnb74
Aug 23 10:46:55.755: INFO: Got endpoints: latency-svc-tnb74 [2.334900414s]
Aug 23 10:46:55.844: INFO: Created: latency-svc-rln59
Aug 23 10:46:55.845: INFO: Got endpoints: latency-svc-rln59 [2.367043773s]
Aug 23 10:46:55.872: INFO: Created: latency-svc-vj6w5
Aug 23 10:46:55.885: INFO: Got endpoints: latency-svc-vj6w5 [2.30597307s]
Aug 23 10:46:55.923: INFO: Created: latency-svc-fq5b5
Aug 23 10:46:55.998: INFO: Got endpoints: latency-svc-fq5b5 [2.399050267s]
Aug 23 10:46:56.009: INFO: Created: latency-svc-49drs
Aug 23 10:46:56.012: INFO: Got endpoints: latency-svc-49drs [2.370641257s]
Aug 23 10:46:56.040: INFO: Created: latency-svc-8jcw2
Aug 23 10:46:56.049: INFO: Got endpoints: latency-svc-8jcw2 [2.375626s]
Aug 23 10:46:56.083: INFO: Created: latency-svc-mwmh7
Aug 23 10:46:56.091: INFO: Got endpoints: latency-svc-mwmh7 [2.377149252s]
Aug 23 10:46:56.166: INFO: Created: latency-svc-96cn2
Aug 23 10:46:56.189: INFO: Got endpoints: latency-svc-96cn2 [1.664116148s]
Aug 23 10:46:56.259: INFO: Created: latency-svc-8gqj5
Aug 23 10:46:56.334: INFO: Got endpoints: latency-svc-8gqj5 [1.671456768s]
Aug 23 10:46:56.344: INFO: Created: latency-svc-dz76x
Aug 23 10:46:56.350: INFO: Got endpoints: latency-svc-dz76x [1.665100854s]
Aug 23 10:46:56.387: INFO: Created: latency-svc-67slr
Aug 23 10:46:56.387: INFO: Got endpoints: latency-svc-67slr [1.266058096s]
Aug 23 10:46:56.415: INFO: Created: latency-svc-qfts9
Aug 23 10:46:56.489: INFO: Got endpoints: latency-svc-qfts9 [1.205715283s]
Aug 23 10:46:56.521: INFO: Created: latency-svc-k84gp
Aug 23 10:46:56.538: INFO: Got endpoints: latency-svc-k84gp [1.030104577s]
Aug 23 10:46:56.557: INFO: Created: latency-svc-vmqdw
Aug 23 10:46:56.567: INFO: Got endpoints: latency-svc-vmqdw [981.770506ms]
Aug 23 10:46:56.677: INFO: Created: latency-svc-8rxrt
Aug 23 10:46:56.689: INFO: Got endpoints: latency-svc-8rxrt [1.022583671s]
Aug 23 10:46:56.714: INFO: Created: latency-svc-b2k7f
Aug 23 10:46:56.754: INFO: Got endpoints: latency-svc-b2k7f [999.056314ms]
Aug 23 10:46:56.838: INFO: Created: latency-svc-fdktx
Aug 23 10:46:56.840: INFO: Got endpoints: latency-svc-fdktx [994.530697ms]
Aug 23 10:46:57.113: INFO: Created: latency-svc-955zf
Aug 23 10:46:57.146: INFO: Got endpoints: latency-svc-955zf [1.260551717s]
Aug 23 10:46:57.292: INFO: Created: latency-svc-486fq
Aug 23 10:46:57.326: INFO: Got endpoints: latency-svc-486fq [1.327616383s]
Aug 23 10:46:57.372: INFO: Created: latency-svc-w4qs6
Aug 23 10:46:57.513: INFO: Got endpoints: latency-svc-w4qs6 [1.501055984s]
Aug 23 10:46:57.543: INFO: Created: latency-svc-2zhq2
Aug 23 10:46:57.699: INFO: Got endpoints: latency-svc-2zhq2 [1.650281376s]
Aug 23 10:46:57.715: INFO: Created: latency-svc-dfp44
Aug 23 10:46:57.744: INFO: Got endpoints: latency-svc-dfp44 [1.652632759s]
Aug 23 10:46:57.778: INFO: Created: latency-svc-qcc4w
Aug 23 10:46:57.902: INFO: Got endpoints: latency-svc-qcc4w [1.713644368s]
Aug 23 10:46:57.904: INFO: Created: latency-svc-f2vjp
Aug 23 10:46:57.956: INFO: Got endpoints: latency-svc-f2vjp [1.621485464s]
Aug 23 10:46:58.057: INFO: Created: latency-svc-w62zn
Aug 23 10:46:58.064: INFO: Got endpoints: latency-svc-w62zn [1.713491831s]
Aug 23 10:46:58.102: INFO: Created: latency-svc-chs7d
Aug 23 10:46:58.112: INFO: Got endpoints: latency-svc-chs7d [1.725290025s]
Aug 23 10:46:58.136: INFO: Created: latency-svc-fsz44
Aug 23 10:46:58.149: INFO: Got endpoints: latency-svc-fsz44 [1.659932547s]
Aug 23 10:46:58.198: INFO: Created: latency-svc-7wsjm
Aug 23 10:46:58.202: INFO: Got endpoints: latency-svc-7wsjm [1.664085282s]
Aug 23 10:46:58.258: INFO: Created: latency-svc-bbl2r
Aug 23 10:46:58.289: INFO: Got endpoints: latency-svc-bbl2r [1.721735722s]
Aug 23 10:46:58.382: INFO: Created: latency-svc-rz4w5
Aug 23 10:46:58.427: INFO: Got endpoints: latency-svc-rz4w5 [1.73800528s]
Aug 23 10:46:58.640: INFO: Created: latency-svc-k7rx9
Aug 23 10:46:58.648: INFO: Got endpoints: latency-svc-k7rx9 [1.893585263s]
Aug 23 10:46:58.855: INFO: Created: latency-svc-7m69g
Aug 23 10:46:58.861: INFO: Got endpoints: latency-svc-7m69g [2.020887598s]
Aug 23 10:46:59.028: INFO: Created: latency-svc-2fnw8
Aug 23 10:46:59.074: INFO: Got endpoints: latency-svc-2fnw8 [1.928574538s]
Aug 23 10:46:59.347: INFO: Created: latency-svc-8q4xf
Aug 23 10:46:59.513: INFO: Got endpoints: latency-svc-8q4xf [2.187067798s]
Aug 23 10:46:59.600: INFO: Created: latency-svc-wj7ns
Aug 23 10:46:59.765: INFO: Got endpoints: latency-svc-wj7ns [2.251344168s]
Aug 23 10:47:00.923: INFO: Created: latency-svc-9kqxx
Aug 23 10:47:01.849: INFO: Created: latency-svc-ssdcf
Aug 23 10:47:01.849: INFO: Got endpoints: latency-svc-9kqxx [4.150070512s]
Aug 23 10:47:02.143: INFO: Got endpoints: latency-svc-ssdcf [4.399426604s]
Aug 23 10:47:02.144: INFO: Created: latency-svc-dkf95
Aug 23 10:47:02.150: INFO: Got endpoints: latency-svc-dkf95 [4.24759274s]
Aug 23 10:47:02.525: INFO: Created: latency-svc-q5flm
Aug 23 10:47:02.651: INFO: Got endpoints: latency-svc-q5flm [4.695181869s]
Aug 23 10:47:02.867: INFO: Created: latency-svc-j47r9
Aug 23 10:47:02.897: INFO: Got endpoints: latency-svc-j47r9 [4.833318612s]
Aug 23 10:47:03.830: INFO: Created: latency-svc-zhc88
Aug 23 10:47:03.911: INFO: Got endpoints: latency-svc-zhc88 [5.79848908s]
Aug 23 10:47:04.250: INFO: Created: latency-svc-pxh84
Aug 23 10:47:04.412: INFO: Got endpoints: latency-svc-pxh84 [6.262621478s]
Aug 23 10:47:04.693: INFO: Created: latency-svc-kj82m
Aug 23 10:47:04.719: INFO: Got endpoints: latency-svc-kj82m [6.516860944s]
Aug 23 10:47:04.862: INFO: Created: latency-svc-f86sz
Aug 23 10:47:04.875: INFO: Got endpoints: latency-svc-f86sz [6.586319577s]
Aug 23 10:47:04.931: INFO: Created: latency-svc-8f78z
Aug 23 10:47:04.956: INFO: Got endpoints: latency-svc-8f78z [6.529022492s]
Aug 23 10:47:05.052: INFO: Created: latency-svc-s2l5m
Aug 23 10:47:05.074: INFO: Got endpoints: latency-svc-s2l5m [6.426262676s]
Aug 23 10:47:05.115: INFO: Created: latency-svc-hdcq5
Aug 23 10:47:05.134: INFO: Got endpoints: latency-svc-hdcq5 [6.273296187s]
Aug 23 10:47:05.244: INFO: Created: latency-svc-2df8g
Aug 23 10:47:05.261: INFO: Got endpoints: latency-svc-2df8g [6.186061781s]
Aug 23 10:47:05.261: INFO: Latencies: [131.517342ms 283.417174ms 322.420151ms 398.607774ms 426.836301ms 504.188591ms 711.905002ms 858.889198ms 860.144252ms 869.760682ms 880.099442ms 888.337636ms 899.545684ms 913.543321ms 928.102567ms 931.921628ms 934.596288ms 946.496992ms 948.556801ms 953.794216ms 958.888554ms 977.051981ms 980.287222ms 980.367124ms 981.770506ms 989.127162ms 992.444045ms 993.665597ms 993.856396ms 994.530697ms 995.577476ms 999.056314ms 999.8823ms 1.000439569s 1.004135258s 1.00775511s 1.011881106s 1.012321241s 1.020076151s 1.022583671s 1.023928992s 1.024276954s 1.027174664s 1.030104577s 1.034500727s 1.045820347s 1.054329027s 1.055778172s 1.057769104s 1.057857504s 1.062066426s 1.072372248s 1.073294478s 1.083873099s 1.085801031s 1.090299323s 1.097094026s 1.103490463s 1.108136173s 1.108952981s 1.115155922s 1.11986432s 1.121148445s 1.12212353s 1.123734089s 1.127416957s 1.143071172s 1.143541079s 1.147725244s 1.166000387s 1.171754883s 1.172190084s 1.173760205s 1.18444035s 1.190846041s 1.196229054s 1.205715283s 1.206194666s 1.206842057s 1.217098004s 1.217292614s 1.221706536s 1.240692027s 1.245476935s 1.249555287s 1.25787766s 1.260551717s 1.266058096s 1.266335543s 1.270767515s 1.272965247s 1.285258471s 1.293561616s 1.300737719s 1.304919108s 1.306906468s 1.318670101s 1.324323116s 1.327616383s 1.335216438s 1.335519464s 1.337196355s 1.349010543s 1.356744453s 1.357341306s 1.367173044s 1.42182474s 1.426998461s 1.501055984s 1.503086321s 1.607095424s 1.621485464s 1.650281376s 1.652632759s 1.654073177s 1.659932547s 1.664085282s 1.664116148s 1.665100854s 1.671456768s 1.713491831s 1.713644368s 1.718087591s 1.721735722s 1.725290025s 1.73800528s 1.767613743s 1.800495255s 1.84899105s 1.888093889s 1.893585263s 1.928574538s 1.950892844s 1.961398306s 1.961615669s 1.962130143s 2.000624868s 2.001767674s 2.015063091s 2.016909981s 2.020887598s 2.025941114s 2.026021908s 2.03947702s 2.041270758s 2.049038076s 2.049968943s 2.056953716s 2.064954452s 2.067463556s 2.083066882s 2.086046183s 2.091629824s 2.093860911s 2.095331815s 2.10768178s 2.115372354s 2.141395508s 2.151581796s 2.187067798s 2.19726046s 2.204951492s 2.205526252s 2.221873477s 2.221888241s 2.23607541s 2.237374063s 2.24465827s 2.248398509s 2.251344168s 2.252162892s 2.25997096s 2.282239241s 2.286953215s 2.305544922s 2.30597307s 2.307701539s 2.334900414s 2.367043773s 2.370641257s 2.375626s 2.377149252s 2.378709914s 2.396016663s 2.399050267s 2.401926594s 2.40425177s 4.150070512s 4.24759274s 4.399426604s 4.695181869s 4.833318612s 5.79848908s 6.186061781s 6.262621478s 6.273296187s 6.426262676s 6.516860944s 6.529022492s 6.586319577s]
Aug 23 10:47:05.261: INFO: 50 %ile: 1.335519464s
Aug 23 10:47:05.261: INFO: 90 %ile: 2.375626s
Aug 23 10:47:05.261: INFO: 99 %ile: 6.529022492s
Aug 23 10:47:05.261: INFO: Total sample count: 200
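The percentile lines above (50 %ile, 90 %ile, 99 %ile over 200 sorted samples) are consistent with a simple index rule of `n * p / 100` into the ascending sample list. A minimal sketch of that rule, using synthetic samples rather than the logged durations; this is an illustration of how such percentiles can be picked, not the e2e framework's actual code:

```python
def percentile(sorted_samples, p):
    """Pick the p-th percentile from an ascending list of samples using the
    simple index rule n * p / 100 (clamped to the last element). This is a
    sketch consistent with the values reported above, not framework code."""
    n = len(sorted_samples)
    idx = n * p // 100
    return sorted_samples[min(idx, n - 1)]

# 200 synthetic samples, like the test's sample count.
samples = list(range(1, 201))
print(percentile(samples, 50), percentile(samples, 90), percentile(samples, 99))
```

With real latencies the same call would be made on the sorted duration list before printing the `%ile` summary lines.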
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:47:05.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-27jxp" for this suite.
Aug 23 10:47:41.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:47:41.342: INFO: namespace: e2e-tests-svc-latency-27jxp, resource: bindings, ignored listing per whitelist
Aug 23 10:47:41.388: INFO: namespace e2e-tests-svc-latency-27jxp deletion completed in 36.108802542s

• [SLOW TEST:67.388 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:47:41.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
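The property being verified here is that a failing readiness probe only gates the pod's Ready condition, whereas a failing liveness probe would restart the container. A toy model of that distinction (a sketch of the semantics, not kubelet code):

```python
def apply_probe_failure(kind, status):
    """Toy model of kubelet probe handling: readiness failures only keep the
    pod unready; liveness failures restart the container. A sketch of the
    semantics this test asserts, not real kubelet logic."""
    if kind == "readiness":
        status["ready"] = False           # pod is never marked Ready
    elif kind == "liveness":
        status["restart_count"] += 1      # container would be restarted
    return status

status = {"ready": False, "restart_count": 0}
for _ in range(10):                       # probe fails repeatedly
    apply_probe_failure("readiness", status)
print(status)                             # ready stays False, restarts stay 0
```

This is why the test's success criterion is "never be ready and never restart": repeated readiness failures must leave the restart count at zero.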
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:48:41.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-6m9hl" for this suite.
Aug 23 10:49:06.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:49:06.062: INFO: namespace: e2e-tests-container-probe-6m9hl, resource: bindings, ignored listing per whitelist
Aug 23 10:49:06.083: INFO: namespace e2e-tests-container-probe-6m9hl deletion completed in 24.529314741s

• [SLOW TEST:84.694 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:49:06.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Aug 23 10:49:06.862: INFO: created pod pod-service-account-defaultsa
Aug 23 10:49:06.862: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 23 10:49:06.915: INFO: created pod pod-service-account-mountsa
Aug 23 10:49:06.915: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 23 10:49:06.940: INFO: created pod pod-service-account-nomountsa
Aug 23 10:49:06.940: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 23 10:49:06.955: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 23 10:49:06.955: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 23 10:49:06.990: INFO: created pod pod-service-account-mountsa-mountspec
Aug 23 10:49:06.990: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 23 10:49:07.060: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 23 10:49:07.060: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 23 10:49:07.070: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 23 10:49:07.071: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 23 10:49:07.097: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 23 10:49:07.097: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 23 10:49:07.136: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 23 10:49:07.136: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
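The nine pods above exercise the `automountServiceAccountToken` precedence rule: an explicit setting on the pod spec overrides the service account's setting, and when neither is set the token is mounted by default. A hedged sketch of that rule, checked against the matrix logged above:

```python
def token_volume_mounted(sa_automount, pod_automount):
    """Effective automountServiceAccountToken: the pod spec's explicit value
    wins over the service account's; unset (None) at both levels defaults to
    mounting the token. A sketch of the precedence rule this test verifies."""
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True

# The combinations observed in the log (SA setting x pod-spec setting):
assert token_volume_mounted(None, None) is True     # defaultsa -> true
assert token_volume_mounted(False, None) is False   # nomountsa -> false
assert token_volume_mounted(False, True) is True    # nomountsa-mountspec -> true
assert token_volume_mounted(True, False) is False   # mountsa-nomountspec -> false
```

Every `service account token volume mount:` line above matches this function's output for the corresponding pair of settings.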
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:49:07.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-wmmcb" for this suite.
Aug 23 10:49:43.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:49:43.310: INFO: namespace: e2e-tests-svcaccounts-wmmcb, resource: bindings, ignored listing per whitelist
Aug 23 10:49:43.336: INFO: namespace e2e-tests-svcaccounts-wmmcb deletion completed in 36.121492686s

• [SLOW TEST:37.253 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:49:43.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Aug 23 10:49:43.547: INFO: Waiting up to 5m0s for pod "downward-api-5a4e2cec-e52e-11ea-87d5-0242ac11000a" in namespace "e2e-tests-downward-api-945wp" to be "success or failure"
Aug 23 10:49:43.560: INFO: Pod "downward-api-5a4e2cec-e52e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.754336ms
Aug 23 10:49:45.564: INFO: Pod "downward-api-5a4e2cec-e52e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016662165s
Aug 23 10:49:47.569: INFO: Pod "downward-api-5a4e2cec-e52e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021759742s
Aug 23 10:49:49.573: INFO: Pod "downward-api-5a4e2cec-e52e-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025454337s
STEP: Saw pod success
Aug 23 10:49:49.573: INFO: Pod "downward-api-5a4e2cec-e52e-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:49:49.575: INFO: Trying to get logs from node hunter-worker2 pod downward-api-5a4e2cec-e52e-11ea-87d5-0242ac11000a container dapi-container: 
STEP: delete the pod
Aug 23 10:49:49.648: INFO: Waiting for pod downward-api-5a4e2cec-e52e-11ea-87d5-0242ac11000a to disappear
Aug 23 10:49:49.674: INFO: Pod downward-api-5a4e2cec-e52e-11ea-87d5-0242ac11000a no longer exists
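The pod under test injects its own name, namespace and IP through downward-API `fieldRef` env vars. A sketch of the env section such a pod spec carries, built as plain dicts (the env var names are illustrative; the `fieldPath` values are the real downward-API paths):

```python
def downward_api_env():
    """Build downward-API env entries as used by this test: metadata.name,
    metadata.namespace and status.podIP are valid fieldRef fieldPaths.
    Env var names here are illustrative, not the test's exact names."""
    def field_env(name, path):
        return {"name": name, "valueFrom": {"fieldRef": {"fieldPath": path}}}
    return [
        field_env("POD_NAME", "metadata.name"),
        field_env("POD_NAMESPACE", "metadata.namespace"),
        field_env("POD_IP", "status.podIP"),
    ]

print([e["valueFrom"]["fieldRef"]["fieldPath"] for e in downward_api_env()])
```

The test container then echoes these variables, and the framework greps its logs for the expected values, which is why the pod only needs to reach `Succeeded`.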
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:49:49.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-945wp" for this suite.
Aug 23 10:49:55.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:49:55.777: INFO: namespace: e2e-tests-downward-api-945wp, resource: bindings, ignored listing per whitelist
Aug 23 10:49:55.797: INFO: namespace e2e-tests-downward-api-945wp deletion completed in 6.119874374s

• [SLOW TEST:12.461 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:49:55.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-lxnzw
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Aug 23 10:49:55.943: INFO: Found 0 stateful pods, waiting for 3
Aug 23 10:50:06.533: INFO: Found 2 stateful pods, waiting for 3
Aug 23 10:50:16.264: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 23 10:50:16.264: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 23 10:50:16.264: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 23 10:50:25.947: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 23 10:50:25.947: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 23 10:50:25.947: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 23 10:50:25.973: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 23 10:50:36.029: INFO: Updating stateful set ss2
Aug 23 10:50:36.036: INFO: Waiting for Pod e2e-tests-statefulset-lxnzw/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Aug 23 10:50:46.911: INFO: Found 2 stateful pods, waiting for 3
Aug 23 10:50:56.916: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 23 10:50:56.916: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 23 10:50:56.916: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 23 10:51:06.917: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 23 10:51:06.917: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 23 10:51:06.917: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 23 10:51:06.940: INFO: Updating stateful set ss2
Aug 23 10:51:07.740: INFO: Waiting for Pod e2e-tests-statefulset-lxnzw/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 23 10:51:17.761: INFO: Updating stateful set ss2
Aug 23 10:51:17.980: INFO: Waiting for StatefulSet e2e-tests-statefulset-lxnzw/ss2 to complete update
Aug 23 10:51:17.980: INFO: Waiting for Pod e2e-tests-statefulset-lxnzw/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 23 10:51:27.986: INFO: Waiting for StatefulSet e2e-tests-statefulset-lxnzw/ss2 to complete update
Aug 23 10:51:27.986: INFO: Waiting for Pod e2e-tests-statefulset-lxnzw/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 23 10:51:37.989: INFO: Waiting for StatefulSet e2e-tests-statefulset-lxnzw/ss2 to complete update
Aug 23 10:51:37.989: INFO: Waiting for Pod e2e-tests-statefulset-lxnzw/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
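Both the canary step and the phased roll-out above follow the partitioned RollingUpdate rule: only pods whose ordinal is greater than or equal to the partition receive the new revision. A sketch of that rule, reusing the revision hashes from the log purely as labels:

```python
def target_revision(ordinal, partition,
                    old="ss2-6c5cd755cd", new="ss2-7c9b54fd4c"):
    """Partitioned RollingUpdate: ordinals >= partition move to the new
    revision; ordinals below it stay on the old one. A sketch of the
    semantics, with the log's revision hashes used only as labels."""
    return new if ordinal >= partition else old

# Canary: 3 replicas, partition=2 -> only ss2-2 is updated first.
print([target_revision(i, 2) for i in range(3)])
# Phased roll-out: lowering the partition to 0 updates the remaining pods.
print([target_revision(i, 0) for i in range(3)])
```

This is why the log shows ss2-2 converging first, then ss2-1, then ss2-0 as the partition is lowered in steps.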
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Aug 23 10:51:47.987: INFO: Deleting all statefulset in ns e2e-tests-statefulset-lxnzw
Aug 23 10:51:47.989: INFO: Scaling statefulset ss2 to 0
Aug 23 10:52:28.018: INFO: Waiting for statefulset status.replicas updated to 0
Aug 23 10:52:28.020: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:52:28.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-lxnzw" for this suite.
Aug 23 10:52:38.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:52:38.313: INFO: namespace: e2e-tests-statefulset-lxnzw, resource: bindings, ignored listing per whitelist
Aug 23 10:52:38.315: INFO: namespace e2e-tests-statefulset-lxnzw deletion completed in 10.26011053s

• [SLOW TEST:162.518 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:52:38.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 23 10:52:46.395: INFO: 10 pods remaining
Aug 23 10:52:46.395: INFO: 8 pods have nil DeletionTimestamp
Aug 23 10:52:46.395: INFO: 
Aug 23 10:52:47.704: INFO: 0 pods remaining
Aug 23 10:52:47.704: INFO: 0 pods have nil DeletionTimestamp
Aug 23 10:52:47.704: INFO: 
Aug 23 10:52:49.378: INFO: 0 pods remaining
Aug 23 10:52:49.378: INFO: 0 pods have nil DeletionTimestamp
Aug 23 10:52:49.379: INFO: 
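The behavior verified here, the rc lingering until its pods are gone, is foreground cascading deletion: the owner gets a deletionTimestamp and the `foregroundDeletion` finalizer, and is only removed once all dependents are deleted. A toy model of that distinction (a sketch of the semantics, not the garbage collector's code):

```python
def owner_fully_deleted(propagation_policy, remaining_dependents):
    """Toy model of deleteOptions propagation: Foreground keeps the owner
    (deletionTimestamp set, foregroundDeletion finalizer held) until all
    dependents are deleted; Background removes the owner immediately and
    cleans up dependents afterwards. A sketch, not controller code."""
    if propagation_policy == "Foreground":
        return remaining_dependents == 0
    return True  # Background (or Orphan): the owner object goes away at once

# The rc is "kept around" while pods remain, exactly as the log shows.
print(owner_fully_deleted("Foreground", 10))  # False: 10 pods remaining
print(owner_fully_deleted("Foreground", 0))   # True: 0 pods remaining
```

The pod-count lines above trace exactly this: the rc persists through "10 pods remaining" and disappears only after "0 pods remaining".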
STEP: Gathering metrics
W0823 10:52:50.494044       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 23 10:52:50.494: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:52:50.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-x6rms" for this suite.
Aug 23 10:52:59.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:52:59.207: INFO: namespace: e2e-tests-gc-x6rms, resource: bindings, ignored listing per whitelist
Aug 23 10:52:59.215: INFO: namespace e2e-tests-gc-x6rms deletion completed in 8.715801935s

• [SLOW TEST:20.900 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:52:59.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 23 10:52:59.684: INFO: Waiting up to 5m0s for pod "pod-cf2208e6-e52e-11ea-87d5-0242ac11000a" in namespace "e2e-tests-emptydir-rvlkk" to be "success or failure"
Aug 23 10:52:59.687: INFO: Pod "pod-cf2208e6-e52e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.790852ms
Aug 23 10:53:01.690: INFO: Pod "pod-cf2208e6-e52e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005928266s
Aug 23 10:53:03.693: INFO: Pod "pod-cf2208e6-e52e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009371249s
Aug 23 10:53:05.858: INFO: Pod "pod-cf2208e6-e52e-11ea-87d5-0242ac11000a": Phase="Running", Reason="", readiness=true. Elapsed: 6.17402015s
Aug 23 10:53:08.295: INFO: Pod "pod-cf2208e6-e52e-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.610913243s
STEP: Saw pod success
Aug 23 10:53:08.295: INFO: Pod "pod-cf2208e6-e52e-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:53:08.298: INFO: Trying to get logs from node hunter-worker pod pod-cf2208e6-e52e-11ea-87d5-0242ac11000a container test-container: 
STEP: delete the pod
Aug 23 10:53:09.194: INFO: Waiting for pod pod-cf2208e6-e52e-11ea-87d5-0242ac11000a to disappear
Aug 23 10:53:09.253: INFO: Pod pod-cf2208e6-e52e-11ea-87d5-0242ac11000a no longer exists
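The test container writes a file on the tmpfs-backed emptyDir and verifies the 0777 mode bits. Mounting tmpfs needs a cluster, but the mode check itself can be sketched on any POSIX filesystem:

```python
import os
import stat
import tempfile

# Create a file, set the 0777 mode the emptyDir test expects, and read the
# permission bits back, analogous to the mount-tester container's check.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "test-file")
    with open(path, "w") as f:
        f.write("mount-tester new file\n")
    os.chmod(path, 0o777)  # chmod ignores umask, so the bits land exactly
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))
```

A non-zero difference between the requested and observed bits is what would turn the pod's "success or failure" condition into a failure in the real test.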
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:53:09.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rvlkk" for this suite.
Aug 23 10:53:15.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:53:15.954: INFO: namespace: e2e-tests-emptydir-rvlkk, resource: bindings, ignored listing per whitelist
Aug 23 10:53:15.981: INFO: namespace e2e-tests-emptydir-rvlkk deletion completed in 6.725536202s

• [SLOW TEST:16.765 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:53:15.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 10:53:16.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Aug 23 10:53:16.766: INFO: stderr: ""
Aug 23 10:53:16.766: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-08-23T03:25:46Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Aug 23 10:53:16.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tt5qj'
Aug 23 10:53:22.239: INFO: stderr: ""
Aug 23 10:53:22.239: INFO: stdout: "replicationcontroller/redis-master created\n"
Aug 23 10:53:22.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tt5qj'
Aug 23 10:53:22.539: INFO: stderr: ""
Aug 23 10:53:22.539: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 23 10:53:23.543: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:53:23.543: INFO: Found 0 / 1
Aug 23 10:53:24.591: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:53:24.591: INFO: Found 0 / 1
Aug 23 10:53:25.543: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:53:25.543: INFO: Found 0 / 1
Aug 23 10:53:26.948: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:53:26.948: INFO: Found 1 / 1
Aug 23 10:53:26.948: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 23 10:53:26.962: INFO: Selector matched 1 pods for map[app:redis]
Aug 23 10:53:26.962: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 23 10:53:26.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-rsfw9 --namespace=e2e-tests-kubectl-tt5qj'
Aug 23 10:53:27.306: INFO: stderr: ""
Aug 23 10:53:27.306: INFO: stdout: "Name:               redis-master-rsfw9\nNamespace:          e2e-tests-kubectl-tt5qj\nPriority:           0\nPriorityClassName:  \nNode:               hunter-worker2/172.18.0.8\nStart Time:         Sun, 23 Aug 2020 10:53:22 +0000\nLabels:             app=redis\n                    role=master\nAnnotations:        \nStatus:             Running\nIP:                 10.244.2.157\nControlled By:      ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://a9f4b32439cd8d9635f1e70b64b4e45ebde8676d8194c71706493fcf367d6d73\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 23 Aug 2020 10:53:25 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bqfgh (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-bqfgh:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-bqfgh\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                     Message\n  ----    ------     ----  ----                     -------\n  Normal  Scheduled  5s    default-scheduler        Successfully assigned e2e-tests-kubectl-tt5qj/redis-master-rsfw9 to hunter-worker2\n  Normal  Pulled     4s    kubelet, hunter-worker2  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, hunter-worker2  Created container\n  Normal  Started    2s    kubelet, hunter-worker2  Started container\n"
Aug 23 10:53:27.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-tt5qj'
Aug 23 10:53:27.504: INFO: stderr: ""
Aug 23 10:53:27.504: INFO: stdout: "Name:         redis-master\nNamespace:    e2e-tests-kubectl-tt5qj\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: redis-master-rsfw9\n"
Aug 23 10:53:27.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-tt5qj'
Aug 23 10:53:27.609: INFO: stderr: ""
Aug 23 10:53:27.609: INFO: stdout: "Name:              redis-master\nNamespace:         e2e-tests-kubectl-tt5qj\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.108.35.17\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.2.157:6379\nSession Affinity:  None\nEvents:            \n"
Aug 23 10:53:27.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane'
Aug 23 10:53:27.730: INFO: stderr: ""
Aug 23 10:53:27.730: INFO: stdout: "Name:               hunter-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/hostname=hunter-control-plane\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:32:36 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sun, 23 Aug 2020 10:53:25 +0000   Sat, 15 Aug 2020 09:32:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sun, 23 Aug 2020 10:53:25 +0000   Sat, 15 Aug 2020 09:32:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sun, 23 Aug 2020 10:53:25 +0000   Sat, 15 Aug 2020 09:32:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sun, 23 Aug 2020 10:53:25 +0000   Sat, 15 Aug 2020 09:33:27 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.4\n  Hostname:    hunter-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nSystem Info:\n Machine ID:                 403efd4ae68744eab619e7055020cc3f\n System UUID:                dafd70bf-eb1f-4422-b415-7379320414ca\n Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version:             4.15.0-109-generic\n OS Image:                   Ubuntu 19.10\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.3.3-14-g449e9269\n Kubelet Version:            v1.13.12\n Kube-Proxy Version:         v1.13.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                            ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-54ff9cd656-7rfjf                        100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     8d\n  kube-system                coredns-54ff9cd656-n4q2v                        100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     8d\n  kube-system                etcd-hunter-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8d\n  kube-system                kindnet-kjrwt                                   100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      8d\n  kube-system                kube-apiserver-hunter-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         8d\n  kube-system                kube-controller-manager-hunter-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         8d\n  kube-system                kube-proxy-5tp66                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8d\n  kube-system                kube-scheduler-hunter-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         8d\n  local-path-storage         local-path-provisioner-674595c7-srvmc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              <none>\n"
Aug 23 10:53:27.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-tt5qj'
Aug 23 10:53:27.829: INFO: stderr: ""
Aug 23 10:53:27.829: INFO: stdout: "Name:         e2e-tests-kubectl-tt5qj\nLabels:       e2e-framework=kubectl\n              e2e-run=d7320f36-e51d-11ea-87d5-0242ac11000a\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:53:27.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tt5qj" for this suite.
Aug 23 10:53:51.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:53:51.923: INFO: namespace: e2e-tests-kubectl-tt5qj, resource: bindings, ignored listing per whitelist
Aug 23 10:53:51.940: INFO: namespace e2e-tests-kubectl-tt5qj deletion completed in 24.107729091s

• [SLOW TEST:35.959 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
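The "Kubectl describe" spec above passes by asserting that each `kubectl describe` stdout contains the fields the conformance test requires (the pod name, namespace, image, event messages, and so on). A minimal, self-contained sketch of that substring check, written in Go like the e2e suite itself (`containsAll` is a hypothetical helper name, not the framework's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// containsAll reports whether every required substring appears in out.
// On failure it also returns the first missing substring, which is what
// a test would surface in its failure message. (Hypothetical helper; the
// real e2e framework's assertion code differs.)
func containsAll(out string, required []string) (string, bool) {
	for _, want := range required {
		if !strings.Contains(out, want) {
			return want, false
		}
	}
	return "", true
}

func main() {
	// A trimmed slice of the `kubectl describe pod` output from the log above.
	out := "Name:               redis-master-rsfw9\n" +
		"Namespace:          e2e-tests-kubectl-tt5qj\n" +
		"Status:             Running\n"
	if missing, ok := containsAll(out, []string{"Name:", "Namespace:", "Status:"}); !ok {
		fmt.Println("missing field:", missing)
		return
	}
	fmt.Println("all required fields present")
}
```

Checking for labeled fields rather than exact output keeps the test stable across kubectl formatting changes, which is why the spec is phrased as "prints relevant information" rather than matching the full text.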
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:53:51.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-ee819d6e-e52e-11ea-87d5-0242ac11000a
STEP: Creating a pod to test consume configMaps
Aug 23 10:53:52.164: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ee8525bb-e52e-11ea-87d5-0242ac11000a" in namespace "e2e-tests-projected-nprz8" to be "success or failure"
Aug 23 10:53:52.175: INFO: Pod "pod-projected-configmaps-ee8525bb-e52e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.206353ms
Aug 23 10:53:54.283: INFO: Pod "pod-projected-configmaps-ee8525bb-e52e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11941319s
Aug 23 10:53:57.053: INFO: Pod "pod-projected-configmaps-ee8525bb-e52e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.889091872s
Aug 23 10:53:59.061: INFO: Pod "pod-projected-configmaps-ee8525bb-e52e-11ea-87d5-0242ac11000a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.897091567s
Aug 23 10:54:02.531: INFO: Pod "pod-projected-configmaps-ee8525bb-e52e-11ea-87d5-0242ac11000a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.367198287s
STEP: Saw pod success
Aug 23 10:54:02.531: INFO: Pod "pod-projected-configmaps-ee8525bb-e52e-11ea-87d5-0242ac11000a" satisfied condition "success or failure"
Aug 23 10:54:03.017: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-ee8525bb-e52e-11ea-87d5-0242ac11000a container projected-configmap-volume-test: 
STEP: delete the pod
Aug 23 10:54:03.587: INFO: Waiting for pod pod-projected-configmaps-ee8525bb-e52e-11ea-87d5-0242ac11000a to disappear
Aug 23 10:54:03.597: INFO: Pod pod-projected-configmaps-ee8525bb-e52e-11ea-87d5-0242ac11000a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:54:03.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nprz8" for this suite.
Aug 23 10:54:11.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:54:11.742: INFO: namespace: e2e-tests-projected-nprz8, resource: bindings, ignored listing per whitelist
Aug 23 10:54:11.744: INFO: namespace e2e-tests-projected-nprz8 deletion completed in 8.123337782s

• [SLOW TEST:19.804 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Aug 23 10:54:11.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Aug 23 10:54:12.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Aug 23 10:54:16.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8c7nw" for this suite.
Aug 23 10:55:00.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 10:55:00.256: INFO: namespace: e2e-tests-pods-8c7nw, resource: bindings, ignored listing per whitelist
Aug 23 10:55:00.279: INFO: namespace e2e-tests-pods-8c7nw deletion completed in 44.092921013s

• [SLOW TEST:48.535 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
Aug 23 10:55:00.279: INFO: Running AfterSuite actions on all nodes
Aug 23 10:55:00.279: INFO: Running AfterSuite actions on node 1
Aug 23 10:55:00.279: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 7408.048 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS