I0217 10:47:45.182255 8 e2e.go:224] Starting e2e run "edac7434-5172-11ea-a180-0242ac110008" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581936464 - Will randomize all specs
Will run 201 of 2164 specs
Feb 17 10:47:45.485: INFO: >>> kubeConfig: /root/.kube/config
Feb 17 10:47:45.488: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 17 10:47:45.513: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 17 10:47:45.560: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 17 10:47:45.560: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 17 10:47:45.560: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 17 10:47:45.568: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 17 10:47:45.568: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 17 10:47:45.568: INFO: e2e test version: v1.13.12
Feb 17 10:47:45.568: INFO: kube-apiserver version: v1.13.8
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 10:47:45.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
Feb 17 10:47:45.748: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-d4t2n/secret-test-ee7d3dc8-5172-11ea-a180-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 17 10:47:45.765: INFO: Waiting up to 5m0s for pod "pod-configmaps-ee7e1b9c-5172-11ea-a180-0242ac110008" in namespace "e2e-tests-secrets-d4t2n" to be "success or failure"
Feb 17 10:47:45.775: INFO: Pod "pod-configmaps-ee7e1b9c-5172-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.011601ms
Feb 17 10:47:47.791: INFO: Pod "pod-configmaps-ee7e1b9c-5172-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025688258s
Feb 17 10:47:49.804: INFO: Pod "pod-configmaps-ee7e1b9c-5172-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038960797s
Feb 17 10:47:51.836: INFO: Pod "pod-configmaps-ee7e1b9c-5172-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071292107s
Feb 17 10:47:54.452: INFO: Pod "pod-configmaps-ee7e1b9c-5172-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.687281222s
Feb 17 10:47:56.476: INFO: Pod "pod-configmaps-ee7e1b9c-5172-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.7111988s
STEP: Saw pod success
Feb 17 10:47:56.477: INFO: Pod "pod-configmaps-ee7e1b9c-5172-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 10:47:56.490: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ee7e1b9c-5172-11ea-a180-0242ac110008 container env-test:
STEP: delete the pod
Feb 17 10:47:56.616: INFO: Waiting for pod pod-configmaps-ee7e1b9c-5172-11ea-a180-0242ac110008 to disappear
Feb 17 10:47:56.749: INFO: Pod pod-configmaps-ee7e1b9c-5172-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 10:47:56.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-d4t2n" for this suite.
Feb 17 10:48:02.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 10:48:02.936: INFO: namespace: e2e-tests-secrets-d4t2n, resource: bindings, ignored listing per whitelist
Feb 17 10:48:03.026: INFO: namespace e2e-tests-secrets-d4t2n deletion completed in 6.254687393s
• [SLOW TEST:17.458 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 10:48:03.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 17 10:48:13.928: INFO: Successfully updated pod "labelsupdatef8e150c8-5172-11ea-a180-0242ac110008"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 10:48:16.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vmlrg" for this suite.
Feb 17 10:48:40.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 10:48:40.267: INFO: namespace: e2e-tests-downward-api-vmlrg, resource: bindings, ignored listing per whitelist
Feb 17 10:48:40.410: INFO: namespace e2e-tests-downward-api-vmlrg deletion completed in 24.253596323s
• [SLOW TEST:37.383 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 10:48:40.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 17 10:48:40.768: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f38b28b-5173-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-779xw" to be "success or failure"
Feb 17 10:48:40.783: INFO: Pod "downwardapi-volume-0f38b28b-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.984041ms
Feb 17 10:48:42.797: INFO: Pod "downwardapi-volume-0f38b28b-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028161361s
Feb 17 10:48:44.806: INFO: Pod "downwardapi-volume-0f38b28b-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037013617s
Feb 17 10:48:46.835: INFO: Pod "downwardapi-volume-0f38b28b-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066553613s
Feb 17 10:48:48.862: INFO: Pod "downwardapi-volume-0f38b28b-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093239317s
Feb 17 10:48:50.876: INFO: Pod "downwardapi-volume-0f38b28b-5173-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.107633688s
STEP: Saw pod success
Feb 17 10:48:50.876: INFO: Pod "downwardapi-volume-0f38b28b-5173-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 10:48:50.881: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0f38b28b-5173-11ea-a180-0242ac110008 container client-container:
STEP: delete the pod
Feb 17 10:48:51.626: INFO: Waiting for pod downwardapi-volume-0f38b28b-5173-11ea-a180-0242ac110008 to disappear
Feb 17 10:48:51.781: INFO: Pod downwardapi-volume-0f38b28b-5173-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 10:48:51.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-779xw" for this suite.
Feb 17 10:48:57.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 10:48:58.029: INFO: namespace: e2e-tests-projected-779xw, resource: bindings, ignored listing per whitelist
Feb 17 10:48:58.118: INFO: namespace e2e-tests-projected-779xw deletion completed in 6.308106229s
• [SLOW TEST:17.708 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 10:48:58.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-p4lwl
Feb 17 10:49:08.531: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-p4lwl
STEP: checking the pod's current state and verifying that restartCount is present
Feb 17 10:49:08.548: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 10:53:09.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-p4lwl" for this suite.
Feb 17 10:53:20.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 10:53:20.153: INFO: namespace: e2e-tests-container-probe-p4lwl, resource: bindings, ignored listing per whitelist
Feb 17 10:53:20.193: INFO: namespace e2e-tests-container-probe-p4lwl deletion completed in 10.320701569s
• [SLOW TEST:262.074 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 10:53:20.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 17 10:53:20.432: INFO: Waiting up to 5m0s for pod "pod-b5f786fd-5173-11ea-a180-0242ac110008" in namespace "e2e-tests-emptydir-x5mm9" to be "success or failure"
Feb 17 10:53:20.438: INFO: Pod "pod-b5f786fd-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.963307ms
Feb 17 10:53:22.630: INFO: Pod "pod-b5f786fd-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19748969s
Feb 17 10:53:24.655: INFO: Pod "pod-b5f786fd-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222078119s
Feb 17 10:53:26.770: INFO: Pod "pod-b5f786fd-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.337776868s
Feb 17 10:53:28.799: INFO: Pod "pod-b5f786fd-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.36590816s
Feb 17 10:53:30.811: INFO: Pod "pod-b5f786fd-5173-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.37856244s
STEP: Saw pod success
Feb 17 10:53:30.811: INFO: Pod "pod-b5f786fd-5173-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 10:53:30.815: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b5f786fd-5173-11ea-a180-0242ac110008 container test-container:
STEP: delete the pod
Feb 17 10:53:31.090: INFO: Waiting for pod pod-b5f786fd-5173-11ea-a180-0242ac110008 to disappear
Feb 17 10:53:31.128: INFO: Pod pod-b5f786fd-5173-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 10:53:31.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-x5mm9" for this suite.
Feb 17 10:53:37.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 10:53:37.335: INFO: namespace: e2e-tests-emptydir-x5mm9, resource: bindings, ignored listing per whitelist
Feb 17 10:53:37.370: INFO: namespace e2e-tests-emptydir-x5mm9 deletion completed in 6.229031291s
• [SLOW TEST:17.177 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 10:53:37.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 10:54:37.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-lrfcm" for this suite.
Feb 17 10:54:59.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 10:54:59.780: INFO: namespace: e2e-tests-container-probe-lrfcm, resource: bindings, ignored listing per whitelist
Feb 17 10:54:59.892: INFO: namespace e2e-tests-container-probe-lrfcm deletion completed in 22.200472563s
• [SLOW TEST:82.522 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 10:54:59.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 10:55:10.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-vwqzl" for this suite.
Feb 17 10:55:16.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 10:55:16.703: INFO: namespace: e2e-tests-emptydir-wrapper-vwqzl, resource: bindings, ignored listing per whitelist
Feb 17 10:55:16.735: INFO: namespace e2e-tests-emptydir-wrapper-vwqzl deletion completed in 6.182835025s
• [SLOW TEST:16.842 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 10:55:16.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 17 10:55:16.954: INFO: Waiting up to 5m0s for pod "pod-fb6a46d0-5173-11ea-a180-0242ac110008" in namespace "e2e-tests-emptydir-p8fqh" to be "success or failure"
Feb 17 10:55:16.986: INFO: Pod "pod-fb6a46d0-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 31.702529ms
Feb 17 10:55:19.018: INFO: Pod "pod-fb6a46d0-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063686145s
Feb 17 10:55:21.036: INFO: Pod "pod-fb6a46d0-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08150496s
Feb 17 10:55:23.050: INFO: Pod "pod-fb6a46d0-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095791065s
Feb 17 10:55:25.469: INFO: Pod "pod-fb6a46d0-5173-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514689264s
Feb 17 10:55:27.702: INFO: Pod "pod-fb6a46d0-5173-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.747634275s
STEP: Saw pod success
Feb 17 10:55:27.703: INFO: Pod "pod-fb6a46d0-5173-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 10:55:27.983: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-fb6a46d0-5173-11ea-a180-0242ac110008 container test-container:
STEP: delete the pod
Feb 17 10:55:28.086: INFO: Waiting for pod pod-fb6a46d0-5173-11ea-a180-0242ac110008 to disappear
Feb 17 10:55:28.180: INFO: Pod pod-fb6a46d0-5173-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 10:55:28.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-p8fqh" for this suite.
Feb 17 10:55:34.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 10:55:34.374: INFO: namespace: e2e-tests-emptydir-p8fqh, resource: bindings, ignored listing per whitelist
Feb 17 10:55:34.468: INFO: namespace e2e-tests-emptydir-p8fqh deletion completed in 6.270123676s
• [SLOW TEST:17.733 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 10:55:34.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-060f94e8-5174-11ea-a180-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 17 10:55:34.887: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0619c231-5174-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-rlklp" to be "success or failure"
Feb 17 10:55:34.911: INFO: Pod "pod-projected-configmaps-0619c231-5174-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 24.113791ms
Feb 17 10:55:36.930: INFO: Pod "pod-projected-configmaps-0619c231-5174-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042788693s
Feb 17 10:55:38.948: INFO: Pod "pod-projected-configmaps-0619c231-5174-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061211616s
Feb 17 10:55:40.969: INFO: Pod "pod-projected-configmaps-0619c231-5174-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08229452s
Feb 17 10:55:42.979: INFO: Pod "pod-projected-configmaps-0619c231-5174-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091651283s
Feb 17 10:55:45.001: INFO: Pod "pod-projected-configmaps-0619c231-5174-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114227145s
STEP: Saw pod success
Feb 17 10:55:45.002: INFO: Pod "pod-projected-configmaps-0619c231-5174-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 10:55:45.014: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-0619c231-5174-11ea-a180-0242ac110008 container projected-configmap-volume-test:
STEP: delete the pod
Feb 17 10:55:45.570: INFO: Waiting for pod pod-projected-configmaps-0619c231-5174-11ea-a180-0242ac110008 to disappear
Feb 17 10:55:45.866: INFO: Pod pod-projected-configmaps-0619c231-5174-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 10:55:45.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rlklp" for this suite.
Feb 17 10:55:51.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 10:55:52.060: INFO: namespace: e2e-tests-projected-rlklp, resource: bindings, ignored listing per whitelist
Feb 17 10:55:52.064: INFO: namespace e2e-tests-projected-rlklp deletion completed in 6.183167634s
• [SLOW TEST:17.596 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 10:55:52.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Feb 17 10:55:52.287: INFO: Waiting up to 5m0s for pod "var-expansion-10774c43-5174-11ea-a180-0242ac110008" in namespace "e2e-tests-var-expansion-5659h" to be "success or failure"
Feb 17 10:55:52.491: INFO: Pod "var-expansion-10774c43-5174-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 203.695927ms
Feb 17 10:55:54.675: INFO: Pod "var-expansion-10774c43-5174-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.388206113s
Feb 17 10:55:56.684: INFO: Pod "var-expansion-10774c43-5174-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397254522s
Feb 17 10:55:58.885: INFO: Pod "var-expansion-10774c43-5174-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.59762644s
Feb 17 10:56:00.900: INFO: Pod "var-expansion-10774c43-5174-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.612782787s
Feb 17 10:56:02.927: INFO: Pod "var-expansion-10774c43-5174-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.640130858s
STEP: Saw pod success
Feb 17 10:56:02.927: INFO: Pod "var-expansion-10774c43-5174-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 10:56:02.948: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-10774c43-5174-11ea-a180-0242ac110008 container dapi-container:
STEP: delete the pod
Feb 17 10:56:03.103: INFO: Waiting for pod var-expansion-10774c43-5174-11ea-a180-0242ac110008 to disappear
Feb 17 10:56:03.118: INFO: Pod var-expansion-10774c43-5174-11ea-a180-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 10:56:03.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-5659h" for this suite.
Feb 17 10:56:09.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 10:56:09.187: INFO: namespace: e2e-tests-var-expansion-5659h, resource: bindings, ignored listing per whitelist
Feb 17 10:56:09.353: INFO: namespace e2e-tests-var-expansion-5659h deletion completed in 6.226090264s
• [SLOW TEST:17.289 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 10:56:09.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 17 10:56:09.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-dgnkj'
Feb 17 10:56:11.288: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 17 10:56:11.288: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Feb 17 10:56:13.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-dgnkj'
Feb 17 10:56:13.990: INFO: stderr: ""
Feb 17 10:56:13.990: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 10:56:13.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-dgnkj" for this suite.
Feb 17 10:56:20.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 10:56:20.673: INFO: namespace: e2e-tests-kubectl-dgnkj, resource: bindings, ignored listing per whitelist
Feb 17 10:56:20.797: INFO: namespace e2e-tests-kubectl-dgnkj deletion completed in 6.754350595s
• [SLOW TEST:11.443 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 10:56:20.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-m4wk8
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-m4wk8
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-m4wk8
Feb 17 10:56:21.068: INFO: Found 0 stateful pods, waiting for 1
Feb 17 10:56:31.078: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 17 10:56:31.091: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 17 10:56:31.842: INFO: stderr: "I0217 10:56:31.301401 86 log.go:172] (0xc00014c840) (0xc00065f360) Create stream\nI0217 10:56:31.301614 86 log.go:172] (0xc00014c840) (0xc00065f360) Stream added, broadcasting: 1\nI0217 10:56:31.307664 86 log.go:172] (0xc00014c840) Reply frame received for 1\nI0217 10:56:31.307702 86 log.go:172] (0xc00014c840) (0xc0007ca000) Create stream\nI0217 10:56:31.307713 86 log.go:172] (0xc00014c840) (0xc0007ca000) Stream added, broadcasting: 3\nI0217 10:56:31.309011 86 log.go:172] (0xc00014c840) Reply frame received for 3\nI0217 10:56:31.309105 86 log.go:172] (0xc00014c840) (0xc0005fe000) Create stream\nI0217 10:56:31.309139 86 log.go:172] (0xc00014c840) (0xc0005fe000) Stream added, broadcasting: 5\nI0217 10:56:31.313491 86 log.go:172] (0xc00014c840) Reply frame received for 5\nI0217 10:56:31.624949 86 log.go:172] (0xc00014c840) Data frame received for 3\nI0217 10:56:31.625004 86 log.go:172] (0xc0007ca000) (3) Data frame handling\nI0217 10:56:31.625039 86 log.go:172] (0xc0007ca000) (3) Data frame sent\nI0217 10:56:31.829395 86 log.go:172] (0xc00014c840) Data frame received for 1\nI0217 10:56:31.829444 86 log.go:172] (0xc00065f360) (1) Data frame handling\nI0217 10:56:31.829477 86 log.go:172] (0xc00065f360) (1) Data frame sent\nI0217 10:56:31.829498 86 log.go:172] (0xc00014c840) (0xc00065f360) Stream removed, broadcasting: 1\nI0217 10:56:31.829886 86 log.go:172] (0xc00014c840) (0xc0007ca000) Stream removed, broadcasting: 3\nI0217 10:56:31.830864 86 log.go:172] (0xc00014c840) (0xc0005fe000) Stream removed, broadcasting: 5\nI0217 10:56:31.830933 86 log.go:172] (0xc00014c840) (0xc00065f360) Stream removed, broadcasting: 1\nI0217 10:56:31.830949 86 log.go:172] (0xc00014c840) (0xc0007ca000) Stream removed, broadcasting: 3\nI0217 10:56:31.830959 86 log.go:172] (0xc00014c840) (0xc0005fe000) Stream removed, broadcasting: 5\n" Feb 17 10:56:31.842: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 17 10:56:31.842: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 17 10:56:31.859: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 17 10:56:41.898: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 17 10:56:41.898: INFO: Waiting for statefulset status.replicas updated to 0 Feb 17 10:56:41.935: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999728s Feb 17 10:56:42.949: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.988850042s Feb 17 10:56:43.972: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.973949886s Feb 17 10:56:45.013: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.951458545s Feb 17 10:56:46.073: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.910795703s Feb 17 10:56:47.093: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.8502707s Feb 17 10:56:48.109: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.830951476s Feb 17 10:56:49.130: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.814611422s Feb 17 10:56:50.200: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.793312489s Feb 17 10:56:51.219: INFO: Verifying statefulset ss doesn't scale past 1 for another 
723.267985ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-m4wk8 Feb 17 10:56:52.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:56:53.157: INFO: stderr: "I0217 10:56:52.623569 109 log.go:172] (0xc000704000) (0xc000634aa0) Create stream\nI0217 10:56:52.623829 109 log.go:172] (0xc000704000) (0xc000634aa0) Stream added, broadcasting: 1\nI0217 10:56:52.632149 109 log.go:172] (0xc000704000) Reply frame received for 1\nI0217 10:56:52.632347 109 log.go:172] (0xc000704000) (0xc000846000) Create stream\nI0217 10:56:52.632375 109 log.go:172] (0xc000704000) (0xc000846000) Stream added, broadcasting: 3\nI0217 10:56:52.659195 109 log.go:172] (0xc000704000) Reply frame received for 3\nI0217 10:56:52.659346 109 log.go:172] (0xc000704000) (0xc000832000) Create stream\nI0217 10:56:52.659379 109 log.go:172] (0xc000704000) (0xc000832000) Stream added, broadcasting: 5\nI0217 10:56:52.666214 109 log.go:172] (0xc000704000) Reply frame received for 5\nI0217 10:56:52.935767 109 log.go:172] (0xc000704000) Data frame received for 3\nI0217 10:56:52.935856 109 log.go:172] (0xc000846000) (3) Data frame handling\nI0217 10:56:52.935891 109 log.go:172] (0xc000846000) (3) Data frame sent\nI0217 10:56:53.142954 109 log.go:172] (0xc000704000) Data frame received for 1\nI0217 10:56:53.143030 109 log.go:172] (0xc000704000) (0xc000846000) Stream removed, broadcasting: 3\nI0217 10:56:53.143129 109 log.go:172] (0xc000634aa0) (1) Data frame handling\nI0217 10:56:53.143165 109 log.go:172] (0xc000634aa0) (1) Data frame sent\nI0217 10:56:53.143341 109 log.go:172] (0xc000704000) (0xc000832000) Stream removed, broadcasting: 5\nI0217 10:56:53.143409 109 log.go:172] (0xc000704000) (0xc000634aa0) Stream removed, broadcasting: 1\nI0217 10:56:53.143424 109 log.go:172] (0xc000704000) Go away received\nI0217 10:56:53.143720 109 log.go:172] (0xc000704000) (0xc000634aa0) Stream removed, broadcasting: 1\nI0217 10:56:53.143734 109 log.go:172] (0xc000704000) (0xc000846000) Stream removed, broadcasting: 3\nI0217 10:56:53.143747 109 log.go:172] (0xc000704000) (0xc000832000) Stream removed, broadcasting: 5\n" Feb 17 10:56:53.158: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 17 10:56:53.158: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 17 10:56:53.175: INFO: Found 1 stateful pods, waiting for 3 Feb 17 10:57:03.191: INFO: Found 2 stateful pods, waiting for 3 Feb 17 10:57:13.199: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 17 10:57:13.199: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 17 10:57:13.199: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 17 10:57:13.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 17 10:57:13.787: INFO: stderr: "I0217 10:57:13.448701 130 log.go:172] (0xc000138840) (0xc00066b400) Create stream\nI0217 10:57:13.448897 130 log.go:172] (0xc000138840) (0xc00066b400) Stream 
added, broadcasting: 1\nI0217 10:57:13.455480 130 log.go:172] (0xc000138840) Reply frame received for 1\nI0217 10:57:13.455554 130 log.go:172] (0xc000138840) (0xc000544000) Create stream\nI0217 10:57:13.455592 130 log.go:172] (0xc000138840) (0xc000544000) Stream added, broadcasting: 3\nI0217 10:57:13.456761 130 log.go:172] (0xc000138840) Reply frame received for 3\nI0217 10:57:13.456804 130 log.go:172] (0xc000138840) (0xc0001f6000) Create stream\nI0217 10:57:13.456813 130 log.go:172] (0xc000138840) (0xc0001f6000) Stream added, broadcasting: 5\nI0217 10:57:13.457832 130 log.go:172] (0xc000138840) Reply frame received for 5\nI0217 10:57:13.636306 130 log.go:172] (0xc000138840) Data frame received for 3\nI0217 10:57:13.636351 130 log.go:172] (0xc000544000) (3) Data frame handling\nI0217 10:57:13.636367 130 log.go:172] (0xc000544000) (3) Data frame sent\nI0217 10:57:13.780022 130 log.go:172] (0xc000138840) (0xc000544000) Stream removed, broadcasting: 3\nI0217 10:57:13.780522 130 log.go:172] (0xc000138840) Data frame received for 1\nI0217 10:57:13.780594 130 log.go:172] (0xc000138840) (0xc0001f6000) Stream removed, broadcasting: 5\nI0217 10:57:13.780706 130 log.go:172] (0xc00066b400) (1) Data frame handling\nI0217 10:57:13.780807 130 log.go:172] (0xc00066b400) (1) Data frame sent\nI0217 10:57:13.780866 130 log.go:172] (0xc000138840) (0xc00066b400) Stream removed, broadcasting: 1\nI0217 10:57:13.780893 130 log.go:172] (0xc000138840) Go away received\nI0217 10:57:13.781284 130 log.go:172] (0xc000138840) (0xc00066b400) Stream removed, broadcasting: 1\nI0217 10:57:13.781357 130 log.go:172] (0xc000138840) (0xc000544000) Stream removed, broadcasting: 3\nI0217 10:57:13.781370 130 log.go:172] (0xc000138840) (0xc0001f6000) Stream removed, broadcasting: 5\n" Feb 17 10:57:13.787: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 17 10:57:13.787: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 17 10:57:13.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 17 10:57:14.285: INFO: stderr: "I0217 10:57:14.015156 152 log.go:172] (0xc0008862c0) (0xc000700640) Create stream\nI0217 10:57:14.015356 152 log.go:172] (0xc0008862c0) (0xc000700640) Stream added, broadcasting: 1\nI0217 10:57:14.021479 152 log.go:172] (0xc0008862c0) Reply frame received for 1\nI0217 10:57:14.021506 152 log.go:172] (0xc0008862c0) (0xc000582dc0) Create stream\nI0217 10:57:14.021512 152 log.go:172] (0xc0008862c0) (0xc000582dc0) Stream added, broadcasting: 3\nI0217 10:57:14.022905 152 log.go:172] (0xc0008862c0) Reply frame received for 3\nI0217 10:57:14.022970 152 log.go:172] (0xc0008862c0) (0xc0006d8000) Create stream\nI0217 10:57:14.022985 152 log.go:172] (0xc0008862c0) (0xc0006d8000) Stream added, broadcasting: 5\nI0217 10:57:14.023854 152 log.go:172] (0xc0008862c0) Reply frame received for 5\nI0217 10:57:14.159896 152 log.go:172] (0xc0008862c0) Data frame received for 3\nI0217 10:57:14.159944 152 log.go:172] (0xc000582dc0) (3) Data frame handling\nI0217 10:57:14.159966 152 log.go:172] (0xc000582dc0) (3) Data frame sent\nI0217 10:57:14.277547 152 log.go:172] (0xc0008862c0) (0xc000582dc0) Stream removed, broadcasting: 3\nI0217 10:57:14.277731 152 log.go:172] (0xc0008862c0) Data frame received for 1\nI0217 10:57:14.277858 152 log.go:172] (0xc0008862c0) (0xc0006d8000) Stream removed, 
broadcasting: 5\nI0217 10:57:14.277943 152 log.go:172] (0xc000700640) (1) Data frame handling\nI0217 10:57:14.277985 152 log.go:172] (0xc000700640) (1) Data frame sent\nI0217 10:57:14.278202 152 log.go:172] (0xc0008862c0) (0xc000700640) Stream removed, broadcasting: 1\nI0217 10:57:14.278293 152 log.go:172] (0xc0008862c0) Go away received\nI0217 10:57:14.278702 152 log.go:172] (0xc0008862c0) (0xc000700640) Stream removed, broadcasting: 1\nI0217 10:57:14.278723 152 log.go:172] (0xc0008862c0) (0xc000582dc0) Stream removed, broadcasting: 3\nI0217 10:57:14.278731 152 log.go:172] (0xc0008862c0) (0xc0006d8000) Stream removed, broadcasting: 5\n" Feb 17 10:57:14.286: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 17 10:57:14.286: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 17 10:57:14.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 17 10:57:14.924: INFO: stderr: "I0217 10:57:14.499627 173 log.go:172] (0xc00015c6e0) (0xc000734640) Create stream\nI0217 10:57:14.499878 173 log.go:172] (0xc00015c6e0) (0xc000734640) Stream added, broadcasting: 1\nI0217 10:57:14.506997 173 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0217 10:57:14.507079 173 log.go:172] (0xc00015c6e0) (0xc000662d20) Create stream\nI0217 10:57:14.507103 173 log.go:172] (0xc00015c6e0) (0xc000662d20) Stream added, broadcasting: 3\nI0217 10:57:14.510076 173 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0217 10:57:14.510131 173 log.go:172] (0xc00015c6e0) (0xc0006a0000) Create stream\nI0217 10:57:14.510151 173 log.go:172] (0xc00015c6e0) (0xc0006a0000) Stream added, broadcasting: 5\nI0217 10:57:14.511589 173 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0217 10:57:14.807943 173 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0217 10:57:14.807985 173 log.go:172] (0xc000662d20) (3) Data frame handling\nI0217 10:57:14.808001 173 log.go:172] (0xc000662d20) (3) Data frame sent\nI0217 10:57:14.917650 173 log.go:172] (0xc00015c6e0) (0xc000662d20) Stream removed, broadcasting: 3\nI0217 10:57:14.917847 173 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0217 10:57:14.917870 173 log.go:172] (0xc000734640) (1) Data frame handling\nI0217 10:57:14.917952 173 log.go:172] (0xc000734640) (1) Data frame sent\nI0217 10:57:14.917964 173 log.go:172] (0xc00015c6e0) (0xc000734640) Stream removed, broadcasting: 1\nI0217 10:57:14.918133 173 log.go:172] (0xc00015c6e0) (0xc0006a0000) Stream removed, broadcasting: 5\nI0217 10:57:14.918287 173 log.go:172] (0xc00015c6e0) Go away received\nI0217 10:57:14.918494 173 log.go:172] (0xc00015c6e0) (0xc000734640) Stream removed, broadcasting: 1\nI0217 10:57:14.918517 173 log.go:172] (0xc00015c6e0) (0xc000662d20) Stream removed, broadcasting: 3\nI0217 10:57:14.918527 173 log.go:172] (0xc00015c6e0) (0xc0006a0000) Stream removed, broadcasting: 5\n" Feb 17 10:57:14.924: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 17 10:57:14.924: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 17 10:57:14.924: INFO: Waiting for statefulset status.replicas updated to 0 Feb 17 10:57:14.933: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 17 10:57:24.966: INFO: Waiting for pod ss-0 to 
enter Running - Ready=false, currently Running - Ready=false Feb 17 10:57:24.966: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 17 10:57:24.966: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 17 10:57:24.995: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999711s Feb 17 10:57:26.044: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985808757s Feb 17 10:57:27.059: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.936404193s Feb 17 10:57:28.119: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.922711299s Feb 17 10:57:29.151: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.862047079s Feb 17 10:57:30.202: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.830214656s Feb 17 10:57:31.330: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.778534519s Feb 17 10:57:32.388: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.650149958s Feb 17 10:57:33.410: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.592836802s Feb 17 10:57:34.434: INFO: Verifying statefulset ss doesn't scale past 3 for another 570.498679ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-m4wk8 Feb 17 10:57:35.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:57:36.038: INFO: stderr: "I0217 10:57:35.732532 195 log.go:172] (0xc000138790) (0xc0005db400) Create stream\nI0217 10:57:35.732805 195 log.go:172] (0xc000138790) (0xc0005db400) Stream added, broadcasting: 1\nI0217 10:57:35.737939 195 log.go:172] (0xc000138790) Reply frame received for 1\nI0217 10:57:35.737971 195 log.go:172] (0xc000138790) (0xc000712000) Create stream\nI0217 10:57:35.737981 195 log.go:172] (0xc000138790) (0xc000712000) Stream added, broadcasting: 3\nI0217 10:57:35.738856 195 log.go:172] (0xc000138790) Reply frame received for 3\nI0217 10:57:35.738900 195 log.go:172] (0xc000138790) (0xc0006e4000) Create stream\nI0217 10:57:35.738907 195 log.go:172] (0xc000138790) (0xc0006e4000) Stream added, broadcasting: 5\nI0217 10:57:35.739724 195 log.go:172] (0xc000138790) Reply frame received for 5\nI0217 10:57:35.864611 195 log.go:172] (0xc000138790) Data frame received for 3\nI0217 10:57:35.864742 195 log.go:172] (0xc000712000) (3) Data frame handling\nI0217 10:57:35.864768 195 log.go:172] (0xc000712000) (3) Data frame sent\nI0217 10:57:36.021789 195 log.go:172] (0xc000138790) Data frame received for 1\nI0217 10:57:36.022037 195 log.go:172] (0xc0005db400) (1) Data frame handling\nI0217 10:57:36.022125 195 log.go:172] (0xc0005db400) (1) Data frame sent\nI0217 10:57:36.022496 195 log.go:172] (0xc000138790) (0xc000712000) Stream removed, broadcasting: 3\nI0217 10:57:36.022890 195 log.go:172] (0xc000138790) (0xc0005db400) Stream removed, broadcasting: 1\nI0217 10:57:36.023357 195 log.go:172] (0xc000138790) (0xc0006e4000) Stream removed, broadcasting: 5\nI0217 10:57:36.023616 195 log.go:172] (0xc000138790) Go away received\nI0217 10:57:36.024004 195 log.go:172] (0xc000138790) (0xc0005db400) Stream removed, broadcasting: 1\nI0217 10:57:36.024030 195 log.go:172] (0xc000138790) (0xc000712000) Stream removed, broadcasting: 3\nI0217 10:57:36.024046 195 log.go:172] (0xc000138790) (0xc0006e4000) Stream removed, 
broadcasting: 5\n" Feb 17 10:57:36.038: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 17 10:57:36.038: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 17 10:57:36.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:57:36.926: INFO: stderr: "I0217 10:57:36.327603 218 log.go:172] (0xc000162d10) (0xc000377a40) Create stream\nI0217 10:57:36.327820 218 log.go:172] (0xc000162d10) (0xc000377a40) Stream added, broadcasting: 1\nI0217 10:57:36.345925 218 log.go:172] (0xc000162d10) Reply frame received for 1\nI0217 10:57:36.345978 218 log.go:172] (0xc000162d10) (0xc000376dc0) Create stream\nI0217 10:57:36.345995 218 log.go:172] (0xc000162d10) (0xc000376dc0) Stream added, broadcasting: 3\nI0217 10:57:36.347784 218 log.go:172] (0xc000162d10) Reply frame received for 3\nI0217 10:57:36.347828 218 log.go:172] (0xc000162d10) (0xc000376f00) Create stream\nI0217 10:57:36.347840 218 log.go:172] (0xc000162d10) (0xc000376f00) Stream added, broadcasting: 5\nI0217 10:57:36.348896 218 log.go:172] (0xc000162d10) Reply frame received for 5\nI0217 10:57:36.570268 218 log.go:172] (0xc000162d10) Data frame received for 3\nI0217 10:57:36.570433 218 log.go:172] (0xc000376dc0) (3) Data frame handling\nI0217 10:57:36.570469 218 log.go:172] (0xc000376dc0) (3) Data frame sent\nI0217 10:57:36.920331 218 log.go:172] (0xc000162d10) (0xc000376dc0) Stream removed, broadcasting: 3\nI0217 10:57:36.920445 218 log.go:172] (0xc000162d10) Data frame received for 1\nI0217 10:57:36.920452 218 log.go:172] (0xc000377a40) (1) Data frame handling\nI0217 10:57:36.920457 218 log.go:172] (0xc000377a40) (1) Data frame sent\nI0217 10:57:36.920655 218 log.go:172] (0xc000162d10) (0xc000377a40) Stream removed, broadcasting: 1\nI0217 10:57:36.920828 218 log.go:172] (0xc000162d10) (0xc000376f00) Stream removed, broadcasting: 5\nI0217 10:57:36.920851 218 log.go:172] (0xc000162d10) Go away received\nI0217 10:57:36.921087 218 log.go:172] (0xc000162d10) (0xc000377a40) Stream removed, broadcasting: 1\nI0217 10:57:36.921160 218 log.go:172] (0xc000162d10) (0xc000376dc0) Stream removed, broadcasting: 3\nI0217 10:57:36.921184 218 log.go:172] (0xc000162d10) (0xc000376f00) Stream removed, broadcasting: 5\n" Feb 17 10:57:36.926: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 17 10:57:36.926: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 17 10:57:36.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:57:37.344: INFO: rc: 126 Feb 17 10:57:37.345: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown I0217 10:57:37.119769 239 log.go:172] (0xc0006dc370) (0xc000708820) Create stream I0217 10:57:37.119906 239 log.go:172] (0xc0006dc370) (0xc000708820) Stream added, broadcasting: 1 I0217 10:57:37.125276 239 log.go:172] (0xc0006dc370) Reply frame received for 
1 I0217 10:57:37.125303 239 log.go:172] (0xc0006dc370) (0xc000554b40) Create stream I0217 10:57:37.125309 239 log.go:172] (0xc0006dc370) (0xc000554b40) Stream added, broadcasting: 3 I0217 10:57:37.126421 239 log.go:172] (0xc0006dc370) Reply frame received for 3 I0217 10:57:37.126436 239 log.go:172] (0xc0006dc370) (0xc0007088c0) Create stream I0217 10:57:37.126440 239 log.go:172] (0xc0006dc370) (0xc0007088c0) Stream added, broadcasting: 5 I0217 10:57:37.127731 239 log.go:172] (0xc0006dc370) Reply frame received for 5 I0217 10:57:37.330664 239 log.go:172] (0xc0006dc370) Data frame received for 3 I0217 10:57:37.330751 239 log.go:172] (0xc000554b40) (3) Data frame handling I0217 10:57:37.330779 239 log.go:172] (0xc000554b40) (3) Data frame sent I0217 10:57:37.336141 239 log.go:172] (0xc0006dc370) Data frame received for 1 I0217 10:57:37.336155 239 log.go:172] (0xc000708820) (1) Data frame handling I0217 10:57:37.336163 239 log.go:172] (0xc000708820) (1) Data frame sent I0217 10:57:37.336594 239 log.go:172] (0xc0006dc370) (0xc000708820) Stream removed, broadcasting: 1 I0217 10:57:37.337862 239 log.go:172] (0xc0006dc370) (0xc000554b40) Stream removed, broadcasting: 3 I0217 10:57:37.338192 239 log.go:172] (0xc0006dc370) (0xc0007088c0) Stream removed, broadcasting: 5 I0217 10:57:37.338222 239 log.go:172] (0xc0006dc370) Go away received I0217 10:57:37.338274 239 log.go:172] (0xc0006dc370) (0xc000708820) Stream removed, broadcasting: 1 I0217 10:57:37.338292 239 log.go:172] (0xc0006dc370) (0xc000554b40) Stream removed, broadcasting: 3 I0217 10:57:37.338306 239 log.go:172] (0xc0006dc370) (0xc0007088c0) Stream removed, broadcasting: 5 command terminated with exit code 126 [] 0xc0018b4f30 exit status 126 true [0xc0019a6750 0xc0019a6768 0xc0019a6780] [0xc0019a6750 0xc0019a6768 0xc0019a6780] [0xc0019a6760 0xc0019a6778] [0x935700 0x935700] 0xc0018c3320 }: Command stdout: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown stderr: I0217 10:57:37.119769 239 log.go:172] (0xc0006dc370) (0xc000708820) Create stream I0217 10:57:37.119906 239 log.go:172] (0xc0006dc370) (0xc000708820) Stream added, broadcasting: 1 I0217 10:57:37.125276 239 log.go:172] (0xc0006dc370) Reply frame received for 1 I0217 10:57:37.125303 239 log.go:172] (0xc0006dc370) (0xc000554b40) Create stream I0217 10:57:37.125309 239 log.go:172] (0xc0006dc370) (0xc000554b40) Stream added, broadcasting: 3 I0217 10:57:37.126421 239 log.go:172] (0xc0006dc370) Reply frame received for 3 I0217 10:57:37.126436 239 log.go:172] (0xc0006dc370) (0xc0007088c0) Create stream I0217 10:57:37.126440 239 log.go:172] (0xc0006dc370) (0xc0007088c0) Stream added, broadcasting: 5 I0217 10:57:37.127731 239 log.go:172] (0xc0006dc370) Reply frame received for 5 I0217 10:57:37.330664 239 log.go:172] (0xc0006dc370) Data frame received for 3 I0217 10:57:37.330751 239 log.go:172] (0xc000554b40) (3) Data frame handling I0217 10:57:37.330779 239 log.go:172] (0xc000554b40) (3) Data frame sent I0217 10:57:37.336141 239 log.go:172] (0xc0006dc370) Data frame received for 1 I0217 10:57:37.336155 239 log.go:172] (0xc000708820) (1) Data frame handling I0217 10:57:37.336163 239 log.go:172] (0xc000708820) (1) Data frame sent I0217 10:57:37.336594 239 log.go:172] (0xc0006dc370) (0xc000708820) Stream removed, broadcasting: 1 I0217 10:57:37.337862 239 log.go:172] (0xc0006dc370) (0xc000554b40) Stream removed, broadcasting: 3 I0217 10:57:37.338192 239 log.go:172] (0xc0006dc370) (0xc0007088c0) Stream removed, broadcasting: 5 I0217 10:57:37.338222 239 
log.go:172] (0xc0006dc370) Go away received I0217 10:57:37.338274 239 log.go:172] (0xc0006dc370) (0xc000708820) Stream removed, broadcasting: 1 I0217 10:57:37.338292 239 log.go:172] (0xc0006dc370) (0xc000554b40) Stream removed, broadcasting: 3 I0217 10:57:37.338306 239 log.go:172] (0xc0006dc370) (0xc0007088c0) Stream removed, broadcasting: 5 command terminated with exit code 126 error: exit status 126 Feb 17 10:57:47.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:57:47.503: INFO: rc: 1 Feb 17 10:57:47.504: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001d0e4e0 exit status 1 true [0xc00098a698 0xc00098a6b0 0xc00098a6c8] [0xc00098a698 0xc00098a6b0 0xc00098a6c8] [0xc00098a6a8 0xc00098a6c0] [0x935700 0x935700] 0xc00179a600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 10:57:57.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:57:57.693: INFO: rc: 1 Feb 17 10:57:57.694: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b543c0 exit status 1 true [0xc00000ebe0 0xc00000eca0 0xc00000ed58] [0xc00000ebe0 0xc00000eca0 0xc00000ed58] [0xc00000ec88 0xc00000ed30] [0x935700 0x935700] 0xc0021941e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 10:58:07.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:58:07.895: INFO: rc: 1 Feb 17 10:58:07.895: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001384120 exit status 1 true [0xc000a58040 0xc000a58110 0xc000a58228] [0xc000a58040 0xc000a58110 0xc000a58228] [0xc000a580d8 0xc000a581f8] [0x935700 0x935700] 0xc001b5c1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 10:58:17.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:58:18.048: INFO: rc: 1 Feb 17 10:58:18.048: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001328120 exit status 1 true [0xc00187e000 0xc00187e018 
0xc00187e030] [0xc00187e000 0xc00187e018 0xc00187e030] [0xc00187e010 0xc00187e028] [0x935700 0x935700] 0xc0019243c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 10:58:28.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:58:28.188: INFO: rc: 1 Feb 17 10:58:28.189: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001384270 exit status 1 true [0xc000a58230 0xc000a582a0 0xc000a582c0] [0xc000a58230 0xc000a582a0 0xc000a582c0] [0xc000a58268 0xc000a582b8] [0x935700 0x935700] 0xc001b5c4e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 10:58:38.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:58:38.371: INFO: rc: 1 Feb 17 10:58:38.372: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001384390 exit status 1 true [0xc000a582f8 0xc000a58348 0xc000a58370] [0xc000a582f8 0xc000a58348 0xc000a58370] [0xc000a58340 0xc000a58368] [0x935700 0x935700] 0xc001b5c780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 10:58:48.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:58:48.596: INFO: rc: 1 Feb 17 10:58:48.597: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0013844b0 exit status 1 true [0xc000a58380 0xc000a58398 0xc000a583b0] [0xc000a58380 0xc000a58398 0xc000a583b0] [0xc000a58390 0xc000a583a8] [0x935700 0x935700] 0xc001b5ca20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 10:58:58.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:58:58.712: INFO: rc: 1 Feb 17 10:58:58.712: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0013845d0 exit status 1 true [0xc000a583b8 0xc000a583f8 0xc000a58460] [0xc000a583b8 0xc000a583f8 0xc000a58460] [0xc000a583f0 0xc000a58438] [0x935700 0x935700] 0xc001b5ccc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found 
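[editor's note] The entries above and below repeat the same shape roughly once every ten seconds: run `kubectl exec` against ss-2, receive `pods "ss-2" not found` (the pod has already been torn down by the scale-down), log "Waiting 10s to retry failed RunHostCmd", and try again until the wait budget is exhausted. As a minimal sketch of that retry shape only (not the e2e framework's actual RunHostCmd implementation; the helper name, the 10s interval, and the 5m deadline are illustrative assumptions), the following Go program shells out to the same kubectl command and retries on failure:

// retryhostcmd.go: minimal sketch of the retry pattern visible in this log.
// It re-runs the same `kubectl exec` until it succeeds or a deadline passes,
// mirroring the repeated "Waiting 10s to retry failed RunHostCmd" entries.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runHostCmdWithRetries shells out to kubectl and retries on failure, the way
// the harness keeps retrying `mv -v /tmp/index.html /usr/share/nginx/html/` on ss-2.
func runHostCmdWithRetries(kubeconfig, ns, pod, cmd string, interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command(
			"kubectl", "--kubeconfig="+kubeconfig,
			"exec", "--namespace="+ns, pod,
			"--", "/bin/sh", "-c", cmd,
		).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		if time.Now().After(deadline) {
			// Give up, as the test eventually does once ss-2 stays NotFound.
			return string(out), fmt.Errorf("command never succeeded: %v", err)
		}
		fmt.Printf("Waiting %s to retry failed command on %s: %v\n", interval, pod, err)
		time.Sleep(interval)
	}
}

func main() {
	out, err := runHostCmdWithRetries(
		"/root/.kube/config", "e2e-tests-statefulset-m4wk8", "ss-2",
		"mv -v /tmp/index.html /usr/share/nginx/html/ || true",
		10*time.Second, 5*time.Minute,
	)
	fmt.Println(out, err)
}

In the run recorded here the loop can never succeed, because ss-2 no longer exists; the harness therefore gives up after its wait budget and proceeds to scale statefulset ss to 0, which is what the entries that follow show. [end editor's note]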
error: exit status 1 Feb 17 10:59:08.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:59:08.861: INFO: rc: 1 Feb 17 10:59:08.861: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001238120 exit status 1 true [0xc0019a6000 0xc0019a6018 0xc0019a6030] [0xc0019a6000 0xc0019a6018 0xc0019a6030] [0xc0019a6010 0xc0019a6028] [0x935700 0x935700] 0xc001e109c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 10:59:18.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:59:19.005: INFO: rc: 1 Feb 17 10:59:19.006: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0013846f0 exit status 1 true [0xc000a58470 0xc000a584a8 0xc000a584e0] [0xc000a58470 0xc000a584a8 0xc000a584e0] [0xc000a584a0 0xc000a584d0] [0x935700 0x935700] 0xc001b5d020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 10:59:29.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:59:29.203: INFO: rc: 1 Feb 17 10:59:29.203: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001384810 exit status 1 true [0xc000a584e8 0xc000a58520 0xc000a58548] [0xc000a584e8 0xc000a58520 0xc000a58548] [0xc000a58518 0xc000a58538] [0x935700 0x935700] 0xc001b5d320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 10:59:39.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:59:39.346: INFO: rc: 1 Feb 17 10:59:39.346: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b54570 exit status 1 true [0xc00000ed68 0xc00000ee08 0xc00000ee70] [0xc00000ed68 0xc00000ee08 0xc00000ee70] [0xc00000ee00 0xc00000ee48] [0x935700 0x935700] 0xc002194480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 10:59:49.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:59:49.480: INFO: rc: 1 Feb 17 10:59:49.480: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001384960 exit status 1 true [0xc000a58570 0xc000a585c0 0xc000a585f8] [0xc000a58570 0xc000a585c0 0xc000a585f8] [0xc000a585b8 0xc000a585f0] [0x935700 0x935700] 0xc001b5d5c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 10:59:59.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 10:59:59.639: INFO: rc: 1 Feb 17 10:59:59.640: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001328150 exit status 1 true [0xc00187e008 0xc00187e020 0xc00187e038] [0xc00187e008 0xc00187e020 0xc00187e038] [0xc00187e018 0xc00187e030] [0x935700 0x935700] 0xc0019243c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:00:09.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:00:09.809: INFO: rc: 1 Feb 17 11:00:09.809: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b543f0 exit status 1 true [0xc00000ebe0 0xc00000eca0 0xc00000ed58] [0xc00000ebe0 0xc00000eca0 0xc00000ed58] [0xc00000ec88 0xc00000ed30] [0x935700 0x935700] 0xc001e109c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:00:19.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:00:19.963: INFO: rc: 1 Feb 17 11:00:19.964: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b54540 exit status 1 true [0xc00000ed68 0xc00000ee08 0xc00000ee70] [0xc00000ed68 0xc00000ee08 0xc00000ee70] [0xc00000ee00 0xc00000ee48] [0x935700 0x935700] 0xc001e10c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:00:29.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:00:30.120: INFO: rc: 1 Feb 17 11:00:30.120: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl 
[kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001384150 exit status 1 true [0xc0019a6008 0xc0019a6020 0xc0019a6038] [0xc0019a6008 0xc0019a6020 0xc0019a6038] [0xc0019a6018 0xc0019a6030] [0x935700 0x935700] 0xc0021941e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:00:40.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:00:40.287: INFO: rc: 1 Feb 17 11:00:40.287: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0013282a0 exit status 1 true [0xc00187e040 0xc00187e058 0xc00187e070] [0xc00187e040 0xc00187e058 0xc00187e070] [0xc00187e050 0xc00187e068] [0x935700 0x935700] 0xc0019246c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:00:50.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:00:50.445: INFO: rc: 1 Feb 17 11:00:50.445: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001238180 exit status 1 true [0xc000a58040 0xc000a58110 0xc000a58228] [0xc000a58040 0xc000a58110 0xc000a58228] [0xc000a580d8 0xc000a581f8] [0x935700 0x935700] 0xc001b5c1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:01:00.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:01:00.593: INFO: rc: 1 Feb 17 11:01:00.593: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0012382d0 exit status 1 true [0xc000a58230 0xc000a582a0 0xc000a582c0] [0xc000a58230 0xc000a582a0 0xc000a582c0] [0xc000a58268 0xc000a582b8] [0x935700 0x935700] 0xc001b5c4e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:01:10.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:01:10.733: INFO: rc: 1 Feb 17 11:01:10.733: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): 
pods "ss-2" not found [] 0xc001b54690 exit status 1 true [0xc00000ee98 0xc00000ef10 0xc00000efb8] [0xc00000ee98 0xc00000ef10 0xc00000efb8] [0xc00000eed0 0xc00000ef70] [0x935700 0x935700] 0xc001e10f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:01:20.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:01:20.856: INFO: rc: 1 Feb 17 11:01:20.856: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc0013283f0 exit status 1 true [0xc00187e078 0xc00187e090 0xc00187e0a8] [0xc00187e078 0xc00187e090 0xc00187e0a8] [0xc00187e088 0xc00187e0a0] [0x935700 0x935700] 0xc001924960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:01:30.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:01:30.996: INFO: rc: 1 Feb 17 11:01:30.997: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b54810 exit status 1 true [0xc00000f0a8 0xc00000f198 0xc00000f1e8] [0xc00000f0a8 0xc00000f198 0xc00000f1e8] [0xc00000f178 0xc00000f1e0] [0x935700 0x935700] 0xc001e111a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:01:40.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:01:41.135: INFO: rc: 1 Feb 17 11:01:41.136: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b54930 exit status 1 true [0xc00000f1f0 0xc00000f260 0xc00000f2e8] [0xc00000f1f0 0xc00000f260 0xc00000f2e8] [0xc00000f248 0xc00000f2d8] [0x935700 0x935700] 0xc001e11440 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:01:51.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:01:51.294: INFO: rc: 1 Feb 17 11:01:51.294: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001238780 exit status 1 true [0xc000a582f8 0xc000a58348 0xc000a58370] [0xc000a582f8 0xc000a58348 0xc000a58370] [0xc000a58340 0xc000a58368] [0x935700 0x935700] 
0xc001b5c780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:02:01.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:02:01.414: INFO: rc: 1 Feb 17 11:02:01.414: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001384120 exit status 1 true [0xc0019a6008 0xc0019a6020 0xc0019a6038] [0xc0019a6008 0xc0019a6020 0xc0019a6038] [0xc0019a6018 0xc0019a6030] [0x935700 0x935700] 0xc0021941e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:02:11.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:02:11.561: INFO: rc: 1 Feb 17 11:02:11.561: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001328120 exit status 1 true [0xc00000ebe0 0xc00000eca0 0xc00000ed58] [0xc00000ebe0 0xc00000eca0 0xc00000ed58] [0xc00000ec88 0xc00000ed30] [0x935700 0x935700] 0xc001e109c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:02:21.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:02:21.707: INFO: rc: 1 Feb 17 11:02:21.708: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b543c0 exit status 1 true [0xc00187e000 0xc00187e018 0xc00187e030] [0xc00187e000 0xc00187e018 0xc00187e030] [0xc00187e010 0xc00187e028] [0x935700 0x935700] 0xc0019243c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:02:31.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:02:31.858: INFO: rc: 1 Feb 17 11:02:31.858: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001b54570 exit status 1 true [0xc00187e038 0xc00187e050 0xc00187e068] [0xc00187e038 0xc00187e050 0xc00187e068] [0xc00187e048 0xc00187e060] [0x935700 0x935700] 0xc0019246c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 17 11:02:41.859: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-m4wk8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:02:42.053: INFO: rc: 1 Feb 17 11:02:42.054: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: Feb 17 11:02:42.054: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 17 11:02:42.111: INFO: Deleting all statefulset in ns e2e-tests-statefulset-m4wk8 Feb 17 11:02:42.119: INFO: Scaling statefulset ss to 0 Feb 17 11:02:42.142: INFO: Waiting for statefulset status.replicas updated to 0 Feb 17 11:02:42.144: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:02:42.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-m4wk8" for this suite. Feb 17 11:02:48.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:02:48.313: INFO: namespace: e2e-tests-statefulset-m4wk8, resource: bindings, ignored listing per whitelist Feb 17 11:02:48.427: INFO: namespace e2e-tests-statefulset-m4wk8 deletion completed in 6.247241765s • [SLOW TEST:387.630 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:02:48.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-6srct [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-6srct STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-6srct Feb 17 11:02:48.907: INFO: Found 0 stateful pods, waiting for 1 Feb 17 11:02:58.927: INFO: Waiting for pod ss-0 to enter Running - Ready=true, 
currently Running - Ready=false Feb 17 11:03:08.920: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 17 11:03:08.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 17 11:03:09.536: INFO: stderr: "I0217 11:03:09.104587 865 log.go:172] (0xc0007a8210) (0xc00032cc80) Create stream\nI0217 11:03:09.104895 865 log.go:172] (0xc0007a8210) (0xc00032cc80) Stream added, broadcasting: 1\nI0217 11:03:09.111522 865 log.go:172] (0xc0007a8210) Reply frame received for 1\nI0217 11:03:09.111578 865 log.go:172] (0xc0007a8210) (0xc0007ca000) Create stream\nI0217 11:03:09.111592 865 log.go:172] (0xc0007a8210) (0xc0007ca000) Stream added, broadcasting: 3\nI0217 11:03:09.112665 865 log.go:172] (0xc0007a8210) Reply frame received for 3\nI0217 11:03:09.112694 865 log.go:172] (0xc0007a8210) (0xc00081e000) Create stream\nI0217 11:03:09.112704 865 log.go:172] (0xc0007a8210) (0xc00081e000) Stream added, broadcasting: 5\nI0217 11:03:09.113685 865 log.go:172] (0xc0007a8210) Reply frame received for 5\nI0217 11:03:09.241853 865 log.go:172] (0xc0007a8210) Data frame received for 3\nI0217 11:03:09.241920 865 log.go:172] (0xc0007ca000) (3) Data frame handling\nI0217 11:03:09.241942 865 log.go:172] (0xc0007ca000) (3) Data frame sent\nI0217 11:03:09.521765 865 log.go:172] (0xc0007a8210) Data frame received for 1\nI0217 11:03:09.522035 865 log.go:172] (0xc0007a8210) (0xc0007ca000) Stream removed, broadcasting: 3\nI0217 11:03:09.522155 865 log.go:172] (0xc00032cc80) (1) Data frame handling\nI0217 11:03:09.522187 865 log.go:172] (0xc00032cc80) (1) Data frame sent\nI0217 11:03:09.522198 865 log.go:172] (0xc0007a8210) (0xc00081e000) Stream removed, broadcasting: 5\nI0217 11:03:09.522253 865 log.go:172] (0xc0007a8210) (0xc00032cc80) Stream removed, broadcasting: 1\nI0217 11:03:09.522287 865 log.go:172] (0xc0007a8210) Go away received\nI0217 11:03:09.522748 865 log.go:172] (0xc0007a8210) (0xc00032cc80) Stream removed, broadcasting: 1\nI0217 11:03:09.522975 865 log.go:172] (0xc0007a8210) (0xc0007ca000) Stream removed, broadcasting: 3\nI0217 11:03:09.523059 865 log.go:172] (0xc0007a8210) (0xc00081e000) Stream removed, broadcasting: 5\n" Feb 17 11:03:09.536: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 17 11:03:09.536: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 17 11:03:09.562: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 17 11:03:09.562: INFO: Waiting for statefulset status.replicas updated to 0 Feb 17 11:03:09.628: INFO: POD NODE PHASE GRACE CONDITIONS Feb 17 11:03:09.629: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:48 +0000 UTC }] Feb 17 11:03:09.629: INFO: Feb 17 11:03:09.629: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 17 11:03:11.323: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 8.978451655s Feb 17 11:03:12.429: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.283784375s Feb 17 11:03:13.576: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.177772019s Feb 17 11:03:14.609: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.031439952s Feb 17 11:03:16.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.997853022s Feb 17 11:03:18.330: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.783364397s Feb 17 11:03:19.382: INFO: Verifying statefulset ss doesn't scale past 3 for another 276.841469ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-6srct Feb 17 11:03:20.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:03:21.449: INFO: stderr: "I0217 11:03:20.707097 887 log.go:172] (0xc00071c370) (0xc0007c8640) Create stream\nI0217 11:03:20.707258 887 log.go:172] (0xc00071c370) (0xc0007c8640) Stream added, broadcasting: 1\nI0217 11:03:20.729648 887 log.go:172] (0xc00071c370) Reply frame received for 1\nI0217 11:03:20.729779 887 log.go:172] (0xc00071c370) (0xc0005bac80) Create stream\nI0217 11:03:20.729812 887 log.go:172] (0xc00071c370) (0xc0005bac80) Stream added, broadcasting: 3\nI0217 11:03:20.731163 887 log.go:172] (0xc00071c370) Reply frame received for 3\nI0217 11:03:20.731228 887 log.go:172] (0xc00071c370) (0xc00002a000) Create stream\nI0217 11:03:20.731246 887 log.go:172] (0xc00071c370) (0xc00002a000) Stream added, broadcasting: 5\nI0217 11:03:20.732906 887 log.go:172] (0xc00071c370) Reply frame received for 5\nI0217 11:03:21.182991 887 log.go:172] (0xc00071c370) Data frame received for 3\nI0217 11:03:21.183038 887 log.go:172] (0xc0005bac80) (3) Data frame handling\nI0217 11:03:21.183058 887 log.go:172] (0xc0005bac80) (3) Data frame sent\nI0217 11:03:21.436248 887 log.go:172] (0xc00071c370) (0xc0005bac80) Stream removed, broadcasting: 3\nI0217 11:03:21.436528 887 log.go:172] (0xc00071c370) Data frame received for 1\nI0217 11:03:21.436544 887 log.go:172] (0xc0007c8640) (1) Data frame handling\nI0217 11:03:21.436572 887 log.go:172] (0xc0007c8640) (1) Data frame sent\nI0217 11:03:21.436593 887 log.go:172] (0xc00071c370) (0xc0007c8640) Stream removed, broadcasting: 1\nI0217 11:03:21.437020 887 log.go:172] (0xc00071c370) (0xc00002a000) Stream removed, broadcasting: 5\nI0217 11:03:21.437056 887 log.go:172] (0xc00071c370) (0xc0007c8640) Stream removed, broadcasting: 1\nI0217 11:03:21.437079 887 log.go:172] (0xc00071c370) (0xc0005bac80) Stream removed, broadcasting: 3\nI0217 11:03:21.437090 887 log.go:172] (0xc00071c370) (0xc00002a000) Stream removed, broadcasting: 5\nI0217 11:03:21.437517 887 log.go:172] (0xc00071c370) Go away received\n" Feb 17 11:03:21.449: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 17 11:03:21.449: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 17 11:03:21.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:03:22.059: INFO: stderr: "I0217 11:03:21.697260 908 log.go:172] (0xc000714370) (0xc000796640) Create stream\nI0217 
11:03:21.697446 908 log.go:172] (0xc000714370) (0xc000796640) Stream added, broadcasting: 1\nI0217 11:03:21.706749 908 log.go:172] (0xc000714370) Reply frame received for 1\nI0217 11:03:21.706798 908 log.go:172] (0xc000714370) (0xc00064ebe0) Create stream\nI0217 11:03:21.706806 908 log.go:172] (0xc000714370) (0xc00064ebe0) Stream added, broadcasting: 3\nI0217 11:03:21.708587 908 log.go:172] (0xc000714370) Reply frame received for 3\nI0217 11:03:21.708655 908 log.go:172] (0xc000714370) (0xc000510000) Create stream\nI0217 11:03:21.708672 908 log.go:172] (0xc000714370) (0xc000510000) Stream added, broadcasting: 5\nI0217 11:03:21.710210 908 log.go:172] (0xc000714370) Reply frame received for 5\nI0217 11:03:21.835405 908 log.go:172] (0xc000714370) Data frame received for 3\nI0217 11:03:21.835474 908 log.go:172] (0xc00064ebe0) (3) Data frame handling\nI0217 11:03:21.835525 908 log.go:172] (0xc00064ebe0) (3) Data frame sent\nI0217 11:03:21.835649 908 log.go:172] (0xc000714370) Data frame received for 5\nI0217 11:03:21.835679 908 log.go:172] (0xc000510000) (5) Data frame handling\nI0217 11:03:21.835704 908 log.go:172] (0xc000510000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0217 11:03:22.052857 908 log.go:172] (0xc000714370) (0xc00064ebe0) Stream removed, broadcasting: 3\nI0217 11:03:22.052977 908 log.go:172] (0xc000714370) Data frame received for 1\nI0217 11:03:22.053023 908 log.go:172] (0xc000714370) (0xc000510000) Stream removed, broadcasting: 5\nI0217 11:03:22.053084 908 log.go:172] (0xc000796640) (1) Data frame handling\nI0217 11:03:22.053143 908 log.go:172] (0xc000796640) (1) Data frame sent\nI0217 11:03:22.053162 908 log.go:172] (0xc000714370) (0xc000796640) Stream removed, broadcasting: 1\nI0217 11:03:22.053173 908 log.go:172] (0xc000714370) Go away received\nI0217 11:03:22.053353 908 log.go:172] (0xc000714370) (0xc000796640) Stream removed, broadcasting: 1\nI0217 11:03:22.053363 908 log.go:172] (0xc000714370) (0xc00064ebe0) Stream removed, broadcasting: 3\nI0217 11:03:22.053369 908 log.go:172] (0xc000714370) (0xc000510000) Stream removed, broadcasting: 5\n" Feb 17 11:03:22.059: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 17 11:03:22.059: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 17 11:03:22.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:03:23.775: INFO: stderr: "I0217 11:03:22.943920 929 log.go:172] (0xc0001380b0) (0xc00066c780) Create stream\nI0217 11:03:22.944069 929 log.go:172] (0xc0001380b0) (0xc00066c780) Stream added, broadcasting: 1\nI0217 11:03:22.957836 929 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0217 11:03:22.957861 929 log.go:172] (0xc0001380b0) (0xc000652b40) Create stream\nI0217 11:03:22.957868 929 log.go:172] (0xc0001380b0) (0xc000652b40) Stream added, broadcasting: 3\nI0217 11:03:22.971185 929 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0217 11:03:22.971229 929 log.go:172] (0xc0001380b0) (0xc000710000) Create stream\nI0217 11:03:22.971238 929 log.go:172] (0xc0001380b0) (0xc000710000) Stream added, broadcasting: 5\nI0217 11:03:22.977568 929 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0217 11:03:23.447060 929 log.go:172] (0xc0001380b0) Data frame received for 5\nI0217 11:03:23.447494 929 log.go:172] (0xc000710000) 
(5) Data frame handling\nI0217 11:03:23.447506 929 log.go:172] (0xc000710000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0217 11:03:23.447523 929 log.go:172] (0xc0001380b0) Data frame received for 3\nI0217 11:03:23.447530 929 log.go:172] (0xc000652b40) (3) Data frame handling\nI0217 11:03:23.447536 929 log.go:172] (0xc000652b40) (3) Data frame sent\nI0217 11:03:23.766699 929 log.go:172] (0xc0001380b0) (0xc000652b40) Stream removed, broadcasting: 3\nI0217 11:03:23.766831 929 log.go:172] (0xc0001380b0) Data frame received for 1\nI0217 11:03:23.766850 929 log.go:172] (0xc00066c780) (1) Data frame handling\nI0217 11:03:23.766864 929 log.go:172] (0xc00066c780) (1) Data frame sent\nI0217 11:03:23.766871 929 log.go:172] (0xc0001380b0) (0xc00066c780) Stream removed, broadcasting: 1\nI0217 11:03:23.766991 929 log.go:172] (0xc0001380b0) (0xc000710000) Stream removed, broadcasting: 5\nI0217 11:03:23.767017 929 log.go:172] (0xc0001380b0) (0xc00066c780) Stream removed, broadcasting: 1\nI0217 11:03:23.767025 929 log.go:172] (0xc0001380b0) (0xc000652b40) Stream removed, broadcasting: 3\nI0217 11:03:23.767032 929 log.go:172] (0xc0001380b0) (0xc000710000) Stream removed, broadcasting: 5\n" Feb 17 11:03:23.776: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 17 11:03:23.776: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 17 11:03:23.803: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 17 11:03:23.803: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 17 11:03:23.803: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 17 11:03:23.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 17 11:03:24.352: INFO: stderr: "I0217 11:03:24.009031 951 log.go:172] (0xc00015a0b0) (0xc00089c5a0) Create stream\nI0217 11:03:24.009425 951 log.go:172] (0xc00015a0b0) (0xc00089c5a0) Stream added, broadcasting: 1\nI0217 11:03:24.021247 951 log.go:172] (0xc00015a0b0) Reply frame received for 1\nI0217 11:03:24.021329 951 log.go:172] (0xc00015a0b0) (0xc0005b6000) Create stream\nI0217 11:03:24.021340 951 log.go:172] (0xc00015a0b0) (0xc0005b6000) Stream added, broadcasting: 3\nI0217 11:03:24.023390 951 log.go:172] (0xc00015a0b0) Reply frame received for 3\nI0217 11:03:24.023456 951 log.go:172] (0xc00015a0b0) (0xc0004d2be0) Create stream\nI0217 11:03:24.023468 951 log.go:172] (0xc00015a0b0) (0xc0004d2be0) Stream added, broadcasting: 5\nI0217 11:03:24.024911 951 log.go:172] (0xc00015a0b0) Reply frame received for 5\nI0217 11:03:24.161442 951 log.go:172] (0xc00015a0b0) Data frame received for 3\nI0217 11:03:24.161591 951 log.go:172] (0xc0005b6000) (3) Data frame handling\nI0217 11:03:24.161622 951 log.go:172] (0xc0005b6000) (3) Data frame sent\nI0217 11:03:24.338407 951 log.go:172] (0xc00015a0b0) Data frame received for 1\nI0217 11:03:24.338504 951 log.go:172] (0xc00089c5a0) (1) Data frame handling\nI0217 11:03:24.338532 951 log.go:172] (0xc00089c5a0) (1) Data frame sent\nI0217 11:03:24.338571 951 log.go:172] (0xc00015a0b0) (0xc00089c5a0) Stream removed, broadcasting: 1\nI0217 11:03:24.338807 951 log.go:172] (0xc00015a0b0) (0xc0004d2be0) Stream 
removed, broadcasting: 5\nI0217 11:03:24.339011 951 log.go:172] (0xc00015a0b0) (0xc0005b6000) Stream removed, broadcasting: 3\nI0217 11:03:24.339184 951 log.go:172] (0xc00015a0b0) (0xc00089c5a0) Stream removed, broadcasting: 1\nI0217 11:03:24.339385 951 log.go:172] (0xc00015a0b0) (0xc0005b6000) Stream removed, broadcasting: 3\nI0217 11:03:24.339459 951 log.go:172] (0xc00015a0b0) (0xc0004d2be0) Stream removed, broadcasting: 5\nI0217 11:03:24.339699 951 log.go:172] (0xc00015a0b0) Go away received\n" Feb 17 11:03:24.353: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 17 11:03:24.353: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 17 11:03:24.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 17 11:03:24.892: INFO: stderr: "I0217 11:03:24.593866 972 log.go:172] (0xc00014c580) (0xc00047d2c0) Create stream\nI0217 11:03:24.594078 972 log.go:172] (0xc00014c580) (0xc00047d2c0) Stream added, broadcasting: 1\nI0217 11:03:24.600875 972 log.go:172] (0xc00014c580) Reply frame received for 1\nI0217 11:03:24.600978 972 log.go:172] (0xc00014c580) (0xc0005d8000) Create stream\nI0217 11:03:24.600997 972 log.go:172] (0xc00014c580) (0xc0005d8000) Stream added, broadcasting: 3\nI0217 11:03:24.602908 972 log.go:172] (0xc00014c580) Reply frame received for 3\nI0217 11:03:24.602971 972 log.go:172] (0xc00014c580) (0xc00061e000) Create stream\nI0217 11:03:24.603000 972 log.go:172] (0xc00014c580) (0xc00061e000) Stream added, broadcasting: 5\nI0217 11:03:24.604013 972 log.go:172] (0xc00014c580) Reply frame received for 5\nI0217 11:03:24.760424 972 log.go:172] (0xc00014c580) Data frame received for 3\nI0217 11:03:24.760767 972 log.go:172] (0xc0005d8000) (3) Data frame handling\nI0217 11:03:24.760808 972 log.go:172] (0xc0005d8000) (3) Data frame sent\nI0217 11:03:24.881787 972 log.go:172] (0xc00014c580) Data frame received for 1\nI0217 11:03:24.882161 972 log.go:172] (0xc00014c580) (0xc0005d8000) Stream removed, broadcasting: 3\nI0217 11:03:24.882502 972 log.go:172] (0xc00014c580) (0xc00061e000) Stream removed, broadcasting: 5\nI0217 11:03:24.882715 972 log.go:172] (0xc00047d2c0) (1) Data frame handling\nI0217 11:03:24.882836 972 log.go:172] (0xc00047d2c0) (1) Data frame sent\nI0217 11:03:24.882924 972 log.go:172] (0xc00014c580) (0xc00047d2c0) Stream removed, broadcasting: 1\nI0217 11:03:24.882997 972 log.go:172] (0xc00014c580) Go away received\nI0217 11:03:24.883806 972 log.go:172] (0xc00014c580) (0xc00047d2c0) Stream removed, broadcasting: 1\nI0217 11:03:24.883874 972 log.go:172] (0xc00014c580) (0xc0005d8000) Stream removed, broadcasting: 3\nI0217 11:03:24.883896 972 log.go:172] (0xc00014c580) (0xc00061e000) Stream removed, broadcasting: 5\n" Feb 17 11:03:24.892: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 17 11:03:24.892: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 17 11:03:24.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 17 11:03:25.317: INFO: stderr: "I0217 11:03:25.053711 993 log.go:172] (0xc0005980b0) (0xc000644780) Create stream\nI0217 11:03:25.053834 993 log.go:172] 
(0xc0005980b0) (0xc000644780) Stream added, broadcasting: 1\nI0217 11:03:25.058464 993 log.go:172] (0xc0005980b0) Reply frame received for 1\nI0217 11:03:25.058498 993 log.go:172] (0xc0005980b0) (0xc000354b40) Create stream\nI0217 11:03:25.058511 993 log.go:172] (0xc0005980b0) (0xc000354b40) Stream added, broadcasting: 3\nI0217 11:03:25.059620 993 log.go:172] (0xc0005980b0) Reply frame received for 3\nI0217 11:03:25.059641 993 log.go:172] (0xc0005980b0) (0xc0002c8000) Create stream\nI0217 11:03:25.059648 993 log.go:172] (0xc0005980b0) (0xc0002c8000) Stream added, broadcasting: 5\nI0217 11:03:25.060804 993 log.go:172] (0xc0005980b0) Reply frame received for 5\nI0217 11:03:25.184214 993 log.go:172] (0xc0005980b0) Data frame received for 3\nI0217 11:03:25.184243 993 log.go:172] (0xc000354b40) (3) Data frame handling\nI0217 11:03:25.184251 993 log.go:172] (0xc000354b40) (3) Data frame sent\nI0217 11:03:25.309074 993 log.go:172] (0xc0005980b0) Data frame received for 1\nI0217 11:03:25.309130 993 log.go:172] (0xc000644780) (1) Data frame handling\nI0217 11:03:25.309142 993 log.go:172] (0xc000644780) (1) Data frame sent\nI0217 11:03:25.309149 993 log.go:172] (0xc0005980b0) (0xc000644780) Stream removed, broadcasting: 1\nI0217 11:03:25.309322 993 log.go:172] (0xc0005980b0) (0xc000354b40) Stream removed, broadcasting: 3\nI0217 11:03:25.309349 993 log.go:172] (0xc0005980b0) (0xc0002c8000) Stream removed, broadcasting: 5\nI0217 11:03:25.309375 993 log.go:172] (0xc0005980b0) (0xc000644780) Stream removed, broadcasting: 1\nI0217 11:03:25.309390 993 log.go:172] (0xc0005980b0) (0xc000354b40) Stream removed, broadcasting: 3\nI0217 11:03:25.309398 993 log.go:172] (0xc0005980b0) (0xc0002c8000) Stream removed, broadcasting: 5\n" Feb 17 11:03:25.317: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 17 11:03:25.317: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 17 11:03:25.317: INFO: Waiting for statefulset status.replicas updated to 0 Feb 17 11:03:25.364: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 17 11:03:35.386: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 17 11:03:35.387: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 17 11:03:35.387: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 17 11:03:35.625: INFO: POD NODE PHASE GRACE CONDITIONS Feb 17 11:03:35.625: INFO: ss-0 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:48 +0000 UTC }] Feb 17 11:03:35.626: INFO: ss-1 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 
11:03:09 +0000 UTC }] Feb 17 11:03:35.626: INFO: ss-2 hunter-server-hu5at5svl7ps Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:35.626: INFO: Feb 17 11:03:35.626: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 17 11:03:37.253: INFO: POD NODE PHASE GRACE CONDITIONS Feb 17 11:03:37.254: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:48 +0000 UTC }] Feb 17 11:03:37.254: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:37.254: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:37.254: INFO: Feb 17 11:03:37.254: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 17 11:03:38.276: INFO: POD NODE PHASE GRACE CONDITIONS Feb 17 11:03:38.276: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:48 +0000 UTC }] Feb 17 11:03:38.276: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:38.276: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:10 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:38.276: INFO: Feb 17 11:03:38.276: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 17 11:03:39.291: INFO: POD NODE PHASE GRACE CONDITIONS Feb 17 11:03:39.291: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:48 +0000 UTC }] Feb 17 11:03:39.291: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:39.291: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:39.292: INFO: Feb 17 11:03:39.292: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 17 11:03:40.575: INFO: POD NODE PHASE GRACE CONDITIONS Feb 17 11:03:40.575: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:48 +0000 UTC }] Feb 17 11:03:40.576: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:40.576: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:40.576: INFO: Feb 17 11:03:40.576: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 17 11:03:41.609: INFO: POD NODE PHASE GRACE CONDITIONS Feb 17 11:03:41.610: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:48 +0000 UTC }] Feb 17 11:03:41.610: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:41.610: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:41.610: INFO: Feb 17 11:03:41.610: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 17 11:03:42.659: INFO: POD NODE PHASE GRACE CONDITIONS Feb 17 11:03:42.660: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:48 +0000 UTC }] Feb 17 11:03:42.660: INFO: ss-1 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:42.660: INFO: ss-2 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 
11:03:42.660: INFO: Feb 17 11:03:42.660: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 17 11:03:43.678: INFO: POD NODE PHASE GRACE CONDITIONS Feb 17 11:03:43.679: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:48 +0000 UTC }] Feb 17 11:03:43.679: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:43.679: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:43.679: INFO: Feb 17 11:03:43.679: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 17 11:03:44.706: INFO: POD NODE PHASE GRACE CONDITIONS Feb 17 11:03:44.707: INFO: ss-0 hunter-server-hu5at5svl7ps Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:02:48 +0000 UTC }] Feb 17 11:03:44.707: INFO: ss-1 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:44.707: INFO: ss-2 hunter-server-hu5at5svl7ps Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:03:09 +0000 UTC }] Feb 17 11:03:44.707: INFO: Feb 17 11:03:44.707: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will 
run in namespacee2e-tests-statefulset-6srct Feb 17 11:03:45.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:03:45.966: INFO: rc: 1 Feb 17 11:03:45.966: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00056fb00 exit status 1 true [0xc0003d6ad8 0xc0003d6b70 0xc0003d6bc8] [0xc0003d6ad8 0xc0003d6b70 0xc0003d6bc8] [0xc0003d6b60 0xc0003d6b88] [0x935700 0x935700] 0xc001e47b60 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 17 11:03:55.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:03:56.115: INFO: rc: 1 Feb 17 11:03:56.115: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00056fcb0 exit status 1 true [0xc0003d6bf0 0xc0003d6c08 0xc0003d6c78] [0xc0003d6bf0 0xc0003d6c08 0xc0003d6c78] [0xc0003d6c00 0xc0003d6c68] [0x935700 0x935700] 0xc001dfe240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:04:06.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:04:06.295: INFO: rc: 1 Feb 17 11:04:06.295: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000383b00 exit status 1 true [0xc0003d6120 0xc0003d61b8 0xc0003d6208] [0xc0003d6120 0xc0003d61b8 0xc0003d6208] [0xc0003d61a0 0xc0003d61e8] [0x935700 0x935700] 0xc001694c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:04:16.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:04:16.508: INFO: rc: 1 Feb 17 11:04:16.508: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00135a8a0 exit status 1 true [0xc000036030 0xc000036128 0xc0000361d0] [0xc000036030 0xc000036128 0xc0000361d0] [0xc000036120 0xc000036198] [0x935700 0x935700] 0xc0019cf080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:04:26.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:04:26.661: INFO: rc: 1 Feb 17 11:04:26.662: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00135a9f0 exit status 1 true [0xc0000361d8 0xc000036238 0xc000036288] [0xc0000361d8 0xc000036238 0xc000036288] [0xc000036200 0xc000036268] [0x935700 0x935700] 0xc001826a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:04:36.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:04:36.775: INFO: rc: 1 Feb 17 11:04:36.775: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c281b0 exit status 1 true [0xc0019a6000 0xc0019a6018 0xc0019a6030] [0xc0019a6000 0xc0019a6018 0xc0019a6030] [0xc0019a6010 0xc0019a6028] [0x935700 0x935700] 0xc001c3b0e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:04:46.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:04:46.941: INFO: rc: 1 Feb 17 11:04:46.941: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010d0120 exit status 1 true [0xc00187e000 0xc00187e018 0xc00187e030] [0xc00187e000 0xc00187e018 0xc00187e030] [0xc00187e010 0xc00187e028] [0x935700 0x935700] 0xc0016ce480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:04:56.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:04:57.093: INFO: rc: 1 Feb 17 11:04:57.093: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00135ab70 exit status 1 true [0xc0000362a8 0xc000036348 0xc0000363b8] [0xc0000362a8 0xc000036348 0xc0000363b8] [0xc000036328 0xc000036390] [0x935700 0x935700] 0xc001827020 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:05:07.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:05:07.199: INFO: rc: 1 Feb 17 11:05:07.200: INFO: Waiting 10s to retry 
failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010d0270 exit status 1 true [0xc00187e038 0xc00187e050 0xc00187e068] [0xc00187e038 0xc00187e050 0xc00187e068] [0xc00187e048 0xc00187e060] [0x935700 0x935700] 0xc0016ceae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:05:17.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:05:17.383: INFO: rc: 1 Feb 17 11:05:17.384: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010d03c0 exit status 1 true [0xc00187e070 0xc00187e088 0xc00187e0a0] [0xc00187e070 0xc00187e088 0xc00187e0a0] [0xc00187e080 0xc00187e098] [0x935700 0x935700] 0xc0016cede0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:05:27.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:05:27.541: INFO: rc: 1 Feb 17 11:05:27.541: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00135ac90 exit status 1 true [0xc0000363d8 0xc000036458 0xc000036478] [0xc0000363d8 0xc000036458 0xc000036478] [0xc000036450 0xc000036470] [0x935700 0x935700] 0xc0018274a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:05:37.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:05:37.689: INFO: rc: 1 Feb 17 11:05:37.690: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c28300 exit status 1 true [0xc0019a6038 0xc0019a6050 0xc0019a6068] [0xc0019a6038 0xc0019a6050 0xc0019a6068] [0xc0019a6048 0xc0019a6060] [0x935700 0x935700] 0xc00146c600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:05:47.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:05:47.856: INFO: rc: 1 Feb 17 11:05:47.857: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010d0540 exit status 1 true [0xc00187e0a8 0xc00187e0c0 0xc00187e0f0] [0xc00187e0a8 0xc00187e0c0 0xc00187e0f0] [0xc00187e0b8 0xc00187e0d8] [0x935700 0x935700] 0xc0016cf2c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:05:57.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:05:58.025: INFO: rc: 1 Feb 17 11:05:58.025: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010d0660 exit status 1 true [0xc00187e0f8 0xc00187e150 0xc00187e178] [0xc00187e0f8 0xc00187e150 0xc00187e178] [0xc00187e130 0xc00187e170] [0x935700 0x935700] 0xc0016cf560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:06:08.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:06:08.612: INFO: rc: 1 Feb 17 11:06:08.612: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00135ade0 exit status 1 true [0xc000036510 0xc000036590 0xc0000365c8] [0xc000036510 0xc000036590 0xc0000365c8] [0xc000036578 0xc0000365c0] [0x935700 0x935700] 0xc0018279e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:06:18.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:06:18.714: INFO: rc: 1 Feb 17 11:06:18.715: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000383b60 exit status 1 true [0xc0019a6000 0xc0019a6018 0xc0019a6030] [0xc0019a6000 0xc0019a6018 0xc0019a6030] [0xc0019a6010 0xc0019a6028] [0x935700 0x935700] 0xc001c3b0e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:06:28.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:06:28.813: INFO: rc: 1 Feb 17 11:06:28.813: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010d0150 exit status 1 true [0xc0003d60c8 0xc0003d61a0 0xc0003d61e8] [0xc0003d60c8 0xc0003d61a0 
0xc0003d61e8] [0xc0003d6128 0xc0003d61d8] [0x935700 0x935700] 0xc0019cf080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:06:38.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:06:38.987: INFO: rc: 1 Feb 17 11:06:38.988: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010d02d0 exit status 1 true [0xc0003d6208 0xc0003d6260 0xc0003d6290] [0xc0003d6208 0xc0003d6260 0xc0003d6290] [0xc0003d6238 0xc0003d6288] [0x935700 0x935700] 0xc00146c5a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:06:48.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:06:49.141: INFO: rc: 1 Feb 17 11:06:49.141: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010d0420 exit status 1 true [0xc0003d62a0 0xc0003d62c0 0xc0003d6338] [0xc0003d62a0 0xc0003d62c0 0xc0003d6338] [0xc0003d62b8 0xc0003d6310] [0x935700 0x935700] 0xc00146d9e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:06:59.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:06:59.305: INFO: rc: 1 Feb 17 11:06:59.306: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010d05a0 exit status 1 true [0xc0003d6340 0xc0003d6360 0xc0003d63c0] [0xc0003d6340 0xc0003d6360 0xc0003d63c0] [0xc0003d6358 0xc0003d63b0] [0x935700 0x935700] 0xc001694c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:07:09.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:07:09.430: INFO: rc: 1 Feb 17 11:07:09.431: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c28210 exit status 1 true [0xc00187e000 0xc00187e018 0xc00187e030] [0xc00187e000 0xc00187e018 0xc00187e030] [0xc00187e010 0xc00187e028] [0x935700 0x935700] 0xc0016ce480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 
11:07:19.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:07:19.546: INFO: rc: 1 Feb 17 11:07:19.546: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010d0720 exit status 1 true [0xc0003d63e8 0xc0003d6438 0xc0003d6470] [0xc0003d63e8 0xc0003d6438 0xc0003d6470] [0xc0003d6418 0xc0003d6460] [0x935700 0x935700] 0xc0016953e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:07:29.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:07:29.679: INFO: rc: 1 Feb 17 11:07:29.680: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000383c80 exit status 1 true [0xc0019a6038 0xc0019a6050 0xc0019a6068] [0xc0019a6038 0xc0019a6050 0xc0019a6068] [0xc0019a6048 0xc0019a6060] [0x935700 0x935700] 0xc001826ae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:07:39.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:07:40.195: INFO: rc: 1 Feb 17 11:07:40.195: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c284b0 exit status 1 true [0xc00187e038 0xc00187e050 0xc00187e068] [0xc00187e038 0xc00187e050 0xc00187e068] [0xc00187e048 0xc00187e060] [0x935700 0x935700] 0xc0016ceae0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:07:50.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:07:50.341: INFO: rc: 1 Feb 17 11:07:50.341: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00135a930 exit status 1 true [0xc000036030 0xc000036128 0xc0000361d0] [0xc000036030 0xc000036128 0xc0000361d0] [0xc000036120 0xc000036198] [0x935700 0x935700] 0xc001e474a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:08:00.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ 
|| true' Feb 17 11:08:00.523: INFO: rc: 1 Feb 17 11:08:00.524: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010d08a0 exit status 1 true [0xc0003d64b0 0xc0003d64f0 0xc0003d6550] [0xc0003d64b0 0xc0003d64f0 0xc0003d6550] [0xc0003d64e0 0xc0003d6520] [0x935700 0x935700] 0xc001695d40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:08:10.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:08:10.675: INFO: rc: 1 Feb 17 11:08:10.675: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010d0120 exit status 1 true [0xc0003d6120 0xc0003d61b8 0xc0003d6208] [0xc0003d6120 0xc0003d61b8 0xc0003d6208] [0xc0003d61a0 0xc0003d61e8] [0x935700 0x935700] 0xc001694c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:08:20.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:08:20.768: INFO: rc: 1 Feb 17 11:08:20.768: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010d0270 exit status 1 true [0xc0003d6230 0xc0003d6270 0xc0003d62a0] [0xc0003d6230 0xc0003d6270 0xc0003d62a0] [0xc0003d6260 0xc0003d6290] [0x935700 0x935700] 0xc0016953e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:08:30.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:08:30.963: INFO: rc: 1 Feb 17 11:08:30.963: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c281b0 exit status 1 true [0xc00187e000 0xc00187e018 0xc00187e030] [0xc00187e000 0xc00187e018 0xc00187e030] [0xc00187e010 0xc00187e028] [0x935700 0x935700] 0xc0019cf080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:08:40.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:08:41.163: INFO: rc: 1 Feb 17 11:08:41.163: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000383b00 exit status 1 true [0xc0019a6000 0xc0019a6018 0xc0019a6030] [0xc0019a6000 0xc0019a6018 0xc0019a6030] [0xc0019a6010 0xc0019a6028] [0x935700 0x935700] 0xc001c3b0e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 11:08:51.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6srct ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 11:08:51.291: INFO: rc: 1 Feb 17 11:08:51.292: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Feb 17 11:08:51.292: INFO: Scaling statefulset ss to 0 Feb 17 11:08:51.354: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 17 11:08:51.358: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6srct Feb 17 11:08:51.391: INFO: Scaling statefulset ss to 0 Feb 17 11:08:51.405: INFO: Waiting for statefulset status.replicas updated to 0 Feb 17 11:08:51.408: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:08:51.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-6srct" for this suite. Feb 17 11:08:59.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:08:59.671: INFO: namespace: e2e-tests-statefulset-6srct, resource: bindings, ignored listing per whitelist Feb 17 11:08:59.700: INFO: namespace e2e-tests-statefulset-6srct deletion completed in 8.237638362s • [SLOW TEST:371.273 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:08:59.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-e5f8d22f-5175-11ea-a180-0242ac110008 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-e5f8d22f-5175-11ea-a180-0242ac110008 STEP: waiting to observe update in volume 
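The RunHostCmd retries above follow a simple poll-until-deadline pattern: re-run the same kubectl exec against ss-0 every 10s and stop once it succeeds or the time budget is spent. A minimal Go sketch of that pattern is below; it is not the e2e framework's RunHostCmd. The kubectl arguments are copied from the log, while the 5-minute budget is an assumption based on the span of the retries.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Same command the log retries against pod ss-0; the 5-minute budget
	// and the 10s interval are assumptions modelled on the log's cadence.
	args := []string{
		"--kubeconfig=/root/.kube/config",
		"exec", "--namespace=e2e-tests-statefulset-6srct", "ss-0",
		"--", "/bin/sh", "-c", "mv -v /tmp/index.html /usr/share/nginx/html/ || true",
	}
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("succeeded: %s\n", out)
			return
		}
		fmt.Printf("rc != 0 (%v); waiting 10s to retry\n", err)
		time.Sleep(10 * time.Second)
	}
	fmt.Println("gave up waiting for the command to succeed on ss-0")
}
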
[AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:09:10.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-twnl9" for this suite. Feb 17 11:09:34.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:09:34.262: INFO: namespace: e2e-tests-projected-twnl9, resource: bindings, ignored listing per whitelist Feb 17 11:09:34.335: INFO: namespace e2e-tests-projected-twnl9 deletion completed in 24.184120301s • [SLOW TEST:34.634 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:09:34.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-d25k6 I0217 11:09:34.759246 8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-d25k6, replica count: 1 I0217 11:09:35.810587 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:09:36.811119 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:09:37.811661 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:09:38.812307 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:09:39.813214 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:09:40.813992 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:09:41.814793 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:09:42.815610 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:09:43.816849 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
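Once svc-latency-rc is running, the test creates services one after another and records how long each takes to show up as endpoints; the bracketed values in the records that follow are those per-service durations, and the final Latencies list is the sorted set they feed into. A minimal Go sketch of that bookkeeping, assuming a nearest-rank percentile and reusing the first few durations from the log as sample data (this is not the e2e suite's implementation):

package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// percentile returns the nearest-rank percentile of an ascending slice.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(math.Ceil(p*float64(len(sorted)))) - 1
	if idx < 0 {
		idx = 0
	}
	return sorted[idx]
}

func main() {
	// Durations (in nanoseconds) copied from the first few "Got endpoints"
	// records that follow this point in the log.
	samples := []time.Duration{
		86101663,  // latency-svc-v5xpm  [86.101663ms]
		132669464, // latency-svc-r75jf  [132.669464ms]
		314100555, // latency-svc-rxpfv  [314.100555ms]
		207837548, // latency-svc-zp547  [207.837548ms]
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	fmt.Println("p50:", percentile(samples, 0.50))
	fmt.Println("p99:", percentile(samples, 0.99))
}
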
I0217 11:09:44.817422 8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 17 11:09:44.963: INFO: Created: latency-svc-v5xpm Feb 17 11:09:45.004: INFO: Got endpoints: latency-svc-v5xpm [86.101663ms] Feb 17 11:09:45.111: INFO: Created: latency-svc-r75jf Feb 17 11:09:45.138: INFO: Got endpoints: latency-svc-r75jf [132.669464ms] Feb 17 11:09:45.274: INFO: Created: latency-svc-rxpfv Feb 17 11:09:45.307: INFO: Created: latency-svc-zp547 Feb 17 11:09:45.319: INFO: Got endpoints: latency-svc-rxpfv [314.100555ms] Feb 17 11:09:45.346: INFO: Got endpoints: latency-svc-zp547 [207.837548ms] Feb 17 11:09:45.517: INFO: Created: latency-svc-67r8g Feb 17 11:09:45.520: INFO: Got endpoints: latency-svc-67r8g [514.213638ms] Feb 17 11:09:45.718: INFO: Created: latency-svc-cwdkr Feb 17 11:09:45.760: INFO: Got endpoints: latency-svc-cwdkr [755.764058ms] Feb 17 11:09:45.908: INFO: Created: latency-svc-gsg8l Feb 17 11:09:45.931: INFO: Got endpoints: latency-svc-gsg8l [925.449662ms] Feb 17 11:09:45.992: INFO: Created: latency-svc-n7g5q Feb 17 11:09:46.002: INFO: Got endpoints: latency-svc-n7g5q [996.268431ms] Feb 17 11:09:46.110: INFO: Created: latency-svc-z5zgm Feb 17 11:09:46.115: INFO: Got endpoints: latency-svc-z5zgm [1.110398769s] Feb 17 11:09:46.166: INFO: Created: latency-svc-mssgp Feb 17 11:09:46.173: INFO: Got endpoints: latency-svc-mssgp [1.167507721s] Feb 17 11:09:46.356: INFO: Created: latency-svc-2fw5f Feb 17 11:09:46.386: INFO: Got endpoints: latency-svc-2fw5f [1.379262169s] Feb 17 11:09:46.623: INFO: Created: latency-svc-zdn6x Feb 17 11:09:46.690: INFO: Got endpoints: latency-svc-zdn6x [1.683708359s] Feb 17 11:09:46.911: INFO: Created: latency-svc-26p5l Feb 17 11:09:46.912: INFO: Got endpoints: latency-svc-26p5l [1.906022106s] Feb 17 11:09:47.031: INFO: Created: latency-svc-2zjmh Feb 17 11:09:47.039: INFO: Got endpoints: latency-svc-2zjmh [2.034186425s] Feb 17 11:09:47.090: INFO: Created: latency-svc-kft5f Feb 17 11:09:47.101: INFO: Got endpoints: latency-svc-kft5f [2.094918765s] Feb 17 11:09:47.225: INFO: Created: latency-svc-68l9q Feb 17 11:09:47.245: INFO: Got endpoints: latency-svc-68l9q [2.238789997s] Feb 17 11:09:47.288: INFO: Created: latency-svc-52sxh Feb 17 11:09:47.507: INFO: Got endpoints: latency-svc-52sxh [2.500634427s] Feb 17 11:09:47.559: INFO: Created: latency-svc-rxwvz Feb 17 11:09:47.582: INFO: Got endpoints: latency-svc-rxwvz [2.263143978s] Feb 17 11:09:47.725: INFO: Created: latency-svc-trt59 Feb 17 11:09:47.732: INFO: Got endpoints: latency-svc-trt59 [2.385323872s] Feb 17 11:09:47.784: INFO: Created: latency-svc-22frj Feb 17 11:09:47.795: INFO: Got endpoints: latency-svc-22frj [2.27425401s] Feb 17 11:09:47.981: INFO: Created: latency-svc-hjcx5 Feb 17 11:09:48.000: INFO: Got endpoints: latency-svc-hjcx5 [2.239166484s] Feb 17 11:09:48.035: INFO: Created: latency-svc-pwg9k Feb 17 11:09:48.043: INFO: Got endpoints: latency-svc-pwg9k [2.111353808s] Feb 17 11:09:48.177: INFO: Created: latency-svc-n5dtq Feb 17 11:09:48.200: INFO: Got endpoints: latency-svc-n5dtq [2.197798772s] Feb 17 11:09:48.267: INFO: Created: latency-svc-bqk6j Feb 17 11:09:48.458: INFO: Got endpoints: latency-svc-bqk6j [2.342281809s] Feb 17 11:09:48.679: INFO: Created: latency-svc-ddhrs Feb 17 11:09:48.715: INFO: Got endpoints: latency-svc-ddhrs [2.541832486s] Feb 17 11:09:48.925: INFO: Created: latency-svc-z5hfs Feb 17 11:09:48.946: INFO: Got endpoints: latency-svc-z5hfs [2.560247385s] Feb 17 11:09:48.974: INFO: 
Created: latency-svc-cwrgp Feb 17 11:09:49.136: INFO: Got endpoints: latency-svc-cwrgp [2.445776292s] Feb 17 11:09:49.156: INFO: Created: latency-svc-khqgn Feb 17 11:09:49.205: INFO: Got endpoints: latency-svc-khqgn [2.293039687s] Feb 17 11:09:49.211: INFO: Created: latency-svc-xx798 Feb 17 11:09:49.329: INFO: Got endpoints: latency-svc-xx798 [2.289692297s] Feb 17 11:09:49.397: INFO: Created: latency-svc-fr5nx Feb 17 11:09:49.419: INFO: Got endpoints: latency-svc-fr5nx [2.317241019s] Feb 17 11:09:49.586: INFO: Created: latency-svc-zbsvc Feb 17 11:09:49.632: INFO: Got endpoints: latency-svc-zbsvc [2.386987708s] Feb 17 11:09:49.645: INFO: Created: latency-svc-sr97r Feb 17 11:09:49.771: INFO: Got endpoints: latency-svc-sr97r [2.263803723s] Feb 17 11:09:49.811: INFO: Created: latency-svc-424dg Feb 17 11:09:49.841: INFO: Got endpoints: latency-svc-424dg [2.258719535s] Feb 17 11:09:49.973: INFO: Created: latency-svc-4prww Feb 17 11:09:49.994: INFO: Got endpoints: latency-svc-4prww [2.262035858s] Feb 17 11:09:50.041: INFO: Created: latency-svc-rgmfj Feb 17 11:09:50.228: INFO: Got endpoints: latency-svc-rgmfj [2.433105825s] Feb 17 11:09:50.253: INFO: Created: latency-svc-tbhvq Feb 17 11:09:50.278: INFO: Got endpoints: latency-svc-tbhvq [2.277692799s] Feb 17 11:09:50.400: INFO: Created: latency-svc-j5v9p Feb 17 11:09:50.445: INFO: Got endpoints: latency-svc-j5v9p [2.402047638s] Feb 17 11:09:50.460: INFO: Created: latency-svc-ksxms Feb 17 11:09:50.662: INFO: Got endpoints: latency-svc-ksxms [2.462020563s] Feb 17 11:09:50.680: INFO: Created: latency-svc-cknnt Feb 17 11:09:50.707: INFO: Got endpoints: latency-svc-cknnt [2.24856383s] Feb 17 11:09:50.904: INFO: Created: latency-svc-z9sww Feb 17 11:09:50.907: INFO: Got endpoints: latency-svc-z9sww [2.19115073s] Feb 17 11:09:51.113: INFO: Created: latency-svc-zd72p Feb 17 11:09:51.125: INFO: Got endpoints: latency-svc-zd72p [2.178985374s] Feb 17 11:09:51.143: INFO: Created: latency-svc-4nnct Feb 17 11:09:51.245: INFO: Got endpoints: latency-svc-4nnct [2.108740668s] Feb 17 11:09:51.260: INFO: Created: latency-svc-p9pcl Feb 17 11:09:51.281: INFO: Got endpoints: latency-svc-p9pcl [2.075761416s] Feb 17 11:09:51.329: INFO: Created: latency-svc-4b7nk Feb 17 11:09:51.438: INFO: Got endpoints: latency-svc-4b7nk [2.108718453s] Feb 17 11:09:51.453: INFO: Created: latency-svc-j9mj9 Feb 17 11:09:51.492: INFO: Got endpoints: latency-svc-j9mj9 [2.073112389s] Feb 17 11:09:51.671: INFO: Created: latency-svc-4n5jz Feb 17 11:09:51.676: INFO: Got endpoints: latency-svc-4n5jz [2.043189152s] Feb 17 11:09:51.760: INFO: Created: latency-svc-58wz8 Feb 17 11:09:51.874: INFO: Got endpoints: latency-svc-58wz8 [2.101760057s] Feb 17 11:09:51.923: INFO: Created: latency-svc-4nw6v Feb 17 11:09:51.936: INFO: Got endpoints: latency-svc-4nw6v [2.095185513s] Feb 17 11:09:51.971: INFO: Created: latency-svc-tvtzw Feb 17 11:09:52.107: INFO: Got endpoints: latency-svc-tvtzw [2.112116329s] Feb 17 11:09:52.144: INFO: Created: latency-svc-f56cl Feb 17 11:09:52.157: INFO: Got endpoints: latency-svc-f56cl [1.928460099s] Feb 17 11:09:52.203: INFO: Created: latency-svc-bvnwt Feb 17 11:09:52.320: INFO: Got endpoints: latency-svc-bvnwt [2.042470427s] Feb 17 11:09:52.351: INFO: Created: latency-svc-cwpd9 Feb 17 11:09:52.382: INFO: Got endpoints: latency-svc-cwpd9 [1.936286968s] Feb 17 11:09:52.539: INFO: Created: latency-svc-vc4pr Feb 17 11:09:52.592: INFO: Got endpoints: latency-svc-vc4pr [1.929265573s] Feb 17 11:09:52.760: INFO: Created: latency-svc-r28n6 Feb 17 11:09:52.771: INFO: Got endpoints: 
latency-svc-r28n6 [2.06326161s] Feb 17 11:09:52.808: INFO: Created: latency-svc-5wc7w Feb 17 11:09:52.819: INFO: Got endpoints: latency-svc-5wc7w [1.912268688s] Feb 17 11:09:52.966: INFO: Created: latency-svc-lrzl9 Feb 17 11:09:53.004: INFO: Got endpoints: latency-svc-lrzl9 [1.878258607s] Feb 17 11:09:53.172: INFO: Created: latency-svc-nh89b Feb 17 11:09:53.173: INFO: Got endpoints: latency-svc-nh89b [1.927304545s] Feb 17 11:09:53.233: INFO: Created: latency-svc-fcnl7 Feb 17 11:09:53.408: INFO: Got endpoints: latency-svc-fcnl7 [2.12678393s] Feb 17 11:09:53.495: INFO: Created: latency-svc-d64qw Feb 17 11:09:53.637: INFO: Got endpoints: latency-svc-d64qw [2.199063798s] Feb 17 11:09:53.665: INFO: Created: latency-svc-jw7rb Feb 17 11:09:53.692: INFO: Got endpoints: latency-svc-jw7rb [2.199468079s] Feb 17 11:09:53.839: INFO: Created: latency-svc-vz8ns Feb 17 11:09:53.857: INFO: Got endpoints: latency-svc-vz8ns [2.181479221s] Feb 17 11:09:54.057: INFO: Created: latency-svc-8hxpx Feb 17 11:09:54.138: INFO: Got endpoints: latency-svc-8hxpx [2.263640412s] Feb 17 11:09:54.519: INFO: Created: latency-svc-zx4r2 Feb 17 11:09:54.611: INFO: Got endpoints: latency-svc-zx4r2 [2.674969011s] Feb 17 11:09:54.869: INFO: Created: latency-svc-8mzq7 Feb 17 11:09:54.883: INFO: Got endpoints: latency-svc-8mzq7 [2.77544517s] Feb 17 11:09:55.164: INFO: Created: latency-svc-752dx Feb 17 11:09:55.268: INFO: Got endpoints: latency-svc-752dx [3.110595658s] Feb 17 11:09:55.280: INFO: Created: latency-svc-xjzjz Feb 17 11:09:55.290: INFO: Got endpoints: latency-svc-xjzjz [2.969551586s] Feb 17 11:09:55.391: INFO: Created: latency-svc-mffgs Feb 17 11:09:55.586: INFO: Got endpoints: latency-svc-mffgs [3.203564053s] Feb 17 11:09:55.669: INFO: Created: latency-svc-5crmg Feb 17 11:09:55.827: INFO: Got endpoints: latency-svc-5crmg [3.235049359s] Feb 17 11:09:55.850: INFO: Created: latency-svc-mflv7 Feb 17 11:09:55.892: INFO: Got endpoints: latency-svc-mflv7 [3.121439665s] Feb 17 11:09:56.083: INFO: Created: latency-svc-vspws Feb 17 11:09:56.123: INFO: Got endpoints: latency-svc-vspws [3.303258039s] Feb 17 11:09:56.300: INFO: Created: latency-svc-h2bzz Feb 17 11:09:56.308: INFO: Got endpoints: latency-svc-h2bzz [3.302474354s] Feb 17 11:09:56.374: INFO: Created: latency-svc-fr88s Feb 17 11:09:56.526: INFO: Got endpoints: latency-svc-fr88s [3.353104161s] Feb 17 11:09:56.571: INFO: Created: latency-svc-xqcmt Feb 17 11:09:56.759: INFO: Got endpoints: latency-svc-xqcmt [3.350498464s] Feb 17 11:09:56.797: INFO: Created: latency-svc-thgn2 Feb 17 11:09:56.815: INFO: Got endpoints: latency-svc-thgn2 [3.176689093s] Feb 17 11:09:56.963: INFO: Created: latency-svc-hzspx Feb 17 11:09:56.975: INFO: Got endpoints: latency-svc-hzspx [3.282759085s] Feb 17 11:09:57.032: INFO: Created: latency-svc-5vvnw Feb 17 11:09:57.117: INFO: Got endpoints: latency-svc-5vvnw [3.259828635s] Feb 17 11:09:57.155: INFO: Created: latency-svc-vvms2 Feb 17 11:09:57.168: INFO: Got endpoints: latency-svc-vvms2 [3.029909607s] Feb 17 11:09:57.309: INFO: Created: latency-svc-2lh4p Feb 17 11:09:57.320: INFO: Got endpoints: latency-svc-2lh4p [2.707918436s] Feb 17 11:09:57.396: INFO: Created: latency-svc-drjnh Feb 17 11:09:57.639: INFO: Created: latency-svc-4t7hq Feb 17 11:09:57.638: INFO: Got endpoints: latency-svc-drjnh [2.755572205s] Feb 17 11:09:57.830: INFO: Got endpoints: latency-svc-4t7hq [2.562483209s] Feb 17 11:09:57.852: INFO: Created: latency-svc-fz6x7 Feb 17 11:09:57.864: INFO: Got endpoints: latency-svc-fz6x7 [2.574040298s] Feb 17 11:09:58.103: INFO: Created: 
latency-svc-9k5xg Feb 17 11:09:58.107: INFO: Got endpoints: latency-svc-9k5xg [2.520050405s] Feb 17 11:09:58.358: INFO: Created: latency-svc-fpf8m Feb 17 11:09:58.601: INFO: Got endpoints: latency-svc-fpf8m [2.773660575s] Feb 17 11:09:58.783: INFO: Created: latency-svc-gns4q Feb 17 11:09:58.804: INFO: Got endpoints: latency-svc-gns4q [2.911707587s] Feb 17 11:09:58.987: INFO: Created: latency-svc-qkb74 Feb 17 11:09:58.989: INFO: Got endpoints: latency-svc-qkb74 [2.866077941s] Feb 17 11:09:58.993: INFO: Created: latency-svc-2njl6 Feb 17 11:09:59.014: INFO: Got endpoints: latency-svc-2njl6 [2.706252468s] Feb 17 11:09:59.073: INFO: Created: latency-svc-vxwmm Feb 17 11:09:59.223: INFO: Got endpoints: latency-svc-vxwmm [2.696414194s] Feb 17 11:09:59.292: INFO: Created: latency-svc-t9cbc Feb 17 11:09:59.455: INFO: Got endpoints: latency-svc-t9cbc [2.695114328s] Feb 17 11:09:59.472: INFO: Created: latency-svc-742sz Feb 17 11:09:59.489: INFO: Got endpoints: latency-svc-742sz [2.67436371s] Feb 17 11:09:59.662: INFO: Created: latency-svc-n9drt Feb 17 11:09:59.674: INFO: Got endpoints: latency-svc-n9drt [2.698783926s] Feb 17 11:09:59.726: INFO: Created: latency-svc-tg4xl Feb 17 11:09:59.736: INFO: Got endpoints: latency-svc-tg4xl [2.618587374s] Feb 17 11:09:59.837: INFO: Created: latency-svc-x6j27 Feb 17 11:09:59.861: INFO: Got endpoints: latency-svc-x6j27 [2.692376989s] Feb 17 11:10:00.006: INFO: Created: latency-svc-gjbxq Feb 17 11:10:00.019: INFO: Got endpoints: latency-svc-gjbxq [2.698675216s] Feb 17 11:10:00.068: INFO: Created: latency-svc-fvwr2 Feb 17 11:10:00.206: INFO: Got endpoints: latency-svc-fvwr2 [2.567405242s] Feb 17 11:10:00.230: INFO: Created: latency-svc-5k7s4 Feb 17 11:10:00.261: INFO: Got endpoints: latency-svc-5k7s4 [2.430072892s] Feb 17 11:10:00.446: INFO: Created: latency-svc-v5c7x Feb 17 11:10:00.477: INFO: Got endpoints: latency-svc-v5c7x [2.612354284s] Feb 17 11:10:00.626: INFO: Created: latency-svc-f92sm Feb 17 11:10:00.674: INFO: Got endpoints: latency-svc-f92sm [2.566956483s] Feb 17 11:10:00.803: INFO: Created: latency-svc-p2blq Feb 17 11:10:00.833: INFO: Got endpoints: latency-svc-p2blq [2.231290435s] Feb 17 11:10:00.967: INFO: Created: latency-svc-66bx4 Feb 17 11:10:00.987: INFO: Got endpoints: latency-svc-66bx4 [2.182854207s] Feb 17 11:10:01.078: INFO: Created: latency-svc-xgl52 Feb 17 11:10:01.272: INFO: Got endpoints: latency-svc-xgl52 [2.282706533s] Feb 17 11:10:01.536: INFO: Created: latency-svc-rh9zt Feb 17 11:10:01.748: INFO: Got endpoints: latency-svc-rh9zt [2.73344191s] Feb 17 11:10:01.790: INFO: Created: latency-svc-z89c9 Feb 17 11:10:01.791: INFO: Got endpoints: latency-svc-z89c9 [2.567294421s] Feb 17 11:10:01.959: INFO: Created: latency-svc-76lcr Feb 17 11:10:02.135: INFO: Got endpoints: latency-svc-76lcr [2.680518285s] Feb 17 11:10:02.297: INFO: Created: latency-svc-dh64v Feb 17 11:10:02.342: INFO: Created: latency-svc-cfdng Feb 17 11:10:02.342: INFO: Got endpoints: latency-svc-dh64v [2.852766203s] Feb 17 11:10:02.365: INFO: Got endpoints: latency-svc-cfdng [2.691035185s] Feb 17 11:10:02.585: INFO: Created: latency-svc-jdgll Feb 17 11:10:02.593: INFO: Got endpoints: latency-svc-jdgll [2.857076437s] Feb 17 11:10:02.699: INFO: Created: latency-svc-fdz69 Feb 17 11:10:02.712: INFO: Got endpoints: latency-svc-fdz69 [2.851116802s] Feb 17 11:10:02.752: INFO: Created: latency-svc-x2876 Feb 17 11:10:02.759: INFO: Got endpoints: latency-svc-x2876 [2.740478292s] Feb 17 11:10:02.925: INFO: Created: latency-svc-t7ck4 Feb 17 11:10:02.952: INFO: Got endpoints: 
latency-svc-t7ck4 [2.745014344s] Feb 17 11:10:03.006: INFO: Created: latency-svc-llq2m Feb 17 11:10:03.109: INFO: Got endpoints: latency-svc-llq2m [2.847571727s] Feb 17 11:10:03.126: INFO: Created: latency-svc-vxsn4 Feb 17 11:10:03.145: INFO: Got endpoints: latency-svc-vxsn4 [2.667376389s] Feb 17 11:10:03.189: INFO: Created: latency-svc-x9mcr Feb 17 11:10:03.199: INFO: Got endpoints: latency-svc-x9mcr [2.524699541s] Feb 17 11:10:03.364: INFO: Created: latency-svc-hbjq7 Feb 17 11:10:03.381: INFO: Got endpoints: latency-svc-hbjq7 [2.547604082s] Feb 17 11:10:03.438: INFO: Created: latency-svc-9hj9l Feb 17 11:10:03.641: INFO: Got endpoints: latency-svc-9hj9l [2.653155384s] Feb 17 11:10:03.709: INFO: Created: latency-svc-v8c89 Feb 17 11:10:03.834: INFO: Got endpoints: latency-svc-v8c89 [2.5614297s] Feb 17 11:10:03.911: INFO: Created: latency-svc-64xmp Feb 17 11:10:04.026: INFO: Got endpoints: latency-svc-64xmp [2.278463988s] Feb 17 11:10:04.050: INFO: Created: latency-svc-758v9 Feb 17 11:10:04.084: INFO: Got endpoints: latency-svc-758v9 [2.293683738s] Feb 17 11:10:04.266: INFO: Created: latency-svc-5w92v Feb 17 11:10:04.300: INFO: Got endpoints: latency-svc-5w92v [2.164278948s] Feb 17 11:10:04.339: INFO: Created: latency-svc-hsf5g Feb 17 11:10:04.429: INFO: Got endpoints: latency-svc-hsf5g [2.086990657s] Feb 17 11:10:04.466: INFO: Created: latency-svc-rcv49 Feb 17 11:10:04.496: INFO: Got endpoints: latency-svc-rcv49 [2.13007194s] Feb 17 11:10:04.743: INFO: Created: latency-svc-m67ds Feb 17 11:10:04.913: INFO: Got endpoints: latency-svc-m67ds [2.319588946s] Feb 17 11:10:04.998: INFO: Created: latency-svc-cm89v Feb 17 11:10:05.104: INFO: Got endpoints: latency-svc-cm89v [2.391263019s] Feb 17 11:10:05.148: INFO: Created: latency-svc-5z4pd Feb 17 11:10:05.156: INFO: Got endpoints: latency-svc-5z4pd [2.396268219s] Feb 17 11:10:05.299: INFO: Created: latency-svc-c5l76 Feb 17 11:10:05.351: INFO: Got endpoints: latency-svc-c5l76 [2.398155364s] Feb 17 11:10:05.354: INFO: Created: latency-svc-gv47t Feb 17 11:10:05.366: INFO: Got endpoints: latency-svc-gv47t [2.256547966s] Feb 17 11:10:05.454: INFO: Created: latency-svc-ddtr6 Feb 17 11:10:05.468: INFO: Got endpoints: latency-svc-ddtr6 [2.323412848s] Feb 17 11:10:05.650: INFO: Created: latency-svc-b8k77 Feb 17 11:10:05.657: INFO: Got endpoints: latency-svc-b8k77 [2.457862422s] Feb 17 11:10:05.691: INFO: Created: latency-svc-hm58j Feb 17 11:10:05.708: INFO: Got endpoints: latency-svc-hm58j [2.325811558s] Feb 17 11:10:05.827: INFO: Created: latency-svc-4phxl Feb 17 11:10:05.844: INFO: Got endpoints: latency-svc-4phxl [2.202788032s] Feb 17 11:10:06.051: INFO: Created: latency-svc-fjt48 Feb 17 11:10:06.064: INFO: Got endpoints: latency-svc-fjt48 [2.229735407s] Feb 17 11:10:06.386: INFO: Created: latency-svc-lfwf4 Feb 17 11:10:06.400: INFO: Got endpoints: latency-svc-lfwf4 [2.373794022s] Feb 17 11:10:06.456: INFO: Created: latency-svc-t4kv5 Feb 17 11:10:06.626: INFO: Got endpoints: latency-svc-t4kv5 [2.541177727s] Feb 17 11:10:06.684: INFO: Created: latency-svc-2h9l5 Feb 17 11:10:06.685: INFO: Got endpoints: latency-svc-2h9l5 [2.384104287s] Feb 17 11:10:06.830: INFO: Created: latency-svc-ls5jx Feb 17 11:10:06.894: INFO: Got endpoints: latency-svc-ls5jx [2.464697534s] Feb 17 11:10:06.909: INFO: Created: latency-svc-bjfgl Feb 17 11:10:07.043: INFO: Got endpoints: latency-svc-bjfgl [2.546888743s] Feb 17 11:10:07.073: INFO: Created: latency-svc-pwx97 Feb 17 11:10:07.083: INFO: Got endpoints: latency-svc-pwx97 [2.168734606s] Feb 17 11:10:07.123: INFO: Created: 
latency-svc-8jlt7 Feb 17 11:10:07.225: INFO: Got endpoints: latency-svc-8jlt7 [2.120627416s] Feb 17 11:10:07.246: INFO: Created: latency-svc-mmlh4 Feb 17 11:10:07.281: INFO: Got endpoints: latency-svc-mmlh4 [2.125411337s] Feb 17 11:10:07.461: INFO: Created: latency-svc-q7hcf Feb 17 11:10:07.501: INFO: Got endpoints: latency-svc-q7hcf [2.15034987s] Feb 17 11:10:07.690: INFO: Created: latency-svc-z68zs Feb 17 11:10:07.841: INFO: Got endpoints: latency-svc-z68zs [2.474703478s] Feb 17 11:10:07.905: INFO: Created: latency-svc-djrq5 Feb 17 11:10:07.936: INFO: Got endpoints: latency-svc-djrq5 [2.468028738s] Feb 17 11:10:08.084: INFO: Created: latency-svc-zdx7d Feb 17 11:10:08.115: INFO: Got endpoints: latency-svc-zdx7d [2.457437659s] Feb 17 11:10:08.285: INFO: Created: latency-svc-9cmwh Feb 17 11:10:08.311: INFO: Got endpoints: latency-svc-9cmwh [2.603272065s] Feb 17 11:10:08.354: INFO: Created: latency-svc-fg6p7 Feb 17 11:10:08.512: INFO: Got endpoints: latency-svc-fg6p7 [2.667822931s] Feb 17 11:10:08.569: INFO: Created: latency-svc-twp2l Feb 17 11:10:08.803: INFO: Got endpoints: latency-svc-twp2l [2.738791317s] Feb 17 11:10:08.832: INFO: Created: latency-svc-d8vsg Feb 17 11:10:08.845: INFO: Got endpoints: latency-svc-d8vsg [2.44457767s] Feb 17 11:10:09.023: INFO: Created: latency-svc-nbpcj Feb 17 11:10:09.079: INFO: Got endpoints: latency-svc-nbpcj [2.452992851s] Feb 17 11:10:09.121: INFO: Created: latency-svc-r72qb Feb 17 11:10:09.180: INFO: Got endpoints: latency-svc-r72qb [2.495658435s] Feb 17 11:10:09.257: INFO: Created: latency-svc-sp852 Feb 17 11:10:09.263: INFO: Got endpoints: latency-svc-sp852 [2.367650053s] Feb 17 11:10:09.379: INFO: Created: latency-svc-298s6 Feb 17 11:10:09.414: INFO: Got endpoints: latency-svc-298s6 [2.3705336s] Feb 17 11:10:09.437: INFO: Created: latency-svc-tshhr Feb 17 11:10:09.532: INFO: Got endpoints: latency-svc-tshhr [2.449115888s] Feb 17 11:10:09.556: INFO: Created: latency-svc-w8pvz Feb 17 11:10:09.576: INFO: Got endpoints: latency-svc-w8pvz [2.350650228s] Feb 17 11:10:09.685: INFO: Created: latency-svc-w8sg9 Feb 17 11:10:09.697: INFO: Got endpoints: latency-svc-w8sg9 [2.415877695s] Feb 17 11:10:09.739: INFO: Created: latency-svc-dz9kv Feb 17 11:10:09.749: INFO: Got endpoints: latency-svc-dz9kv [2.247368663s] Feb 17 11:10:09.855: INFO: Created: latency-svc-2spq4 Feb 17 11:10:09.869: INFO: Got endpoints: latency-svc-2spq4 [2.027829252s] Feb 17 11:10:09.935: INFO: Created: latency-svc-x8klb Feb 17 11:10:10.091: INFO: Got endpoints: latency-svc-x8klb [2.154124647s] Feb 17 11:10:10.107: INFO: Created: latency-svc-75pfl Feb 17 11:10:10.127: INFO: Got endpoints: latency-svc-75pfl [2.011844419s] Feb 17 11:10:10.318: INFO: Created: latency-svc-m2t6d Feb 17 11:10:10.480: INFO: Created: latency-svc-pr4m8 Feb 17 11:10:10.483: INFO: Got endpoints: latency-svc-m2t6d [2.171189157s] Feb 17 11:10:10.588: INFO: Got endpoints: latency-svc-pr4m8 [2.075138439s] Feb 17 11:10:10.615: INFO: Created: latency-svc-gjhh4 Feb 17 11:10:10.819: INFO: Got endpoints: latency-svc-gjhh4 [2.015618815s] Feb 17 11:10:10.870: INFO: Created: latency-svc-b8cqq Feb 17 11:10:10.901: INFO: Got endpoints: latency-svc-b8cqq [2.055279863s] Feb 17 11:10:11.096: INFO: Created: latency-svc-xmmgv Feb 17 11:10:11.118: INFO: Got endpoints: latency-svc-xmmgv [2.038850882s] Feb 17 11:10:11.173: INFO: Created: latency-svc-4rwqx Feb 17 11:10:11.252: INFO: Got endpoints: latency-svc-4rwqx [2.071234625s] Feb 17 11:10:11.267: INFO: Created: latency-svc-w67f2 Feb 17 11:10:11.277: INFO: Got endpoints: 
latency-svc-w67f2 [2.014401016s] Feb 17 11:10:11.322: INFO: Created: latency-svc-xtwdr Feb 17 11:10:11.333: INFO: Got endpoints: latency-svc-xtwdr [1.918538413s] Feb 17 11:10:11.438: INFO: Created: latency-svc-nq47m Feb 17 11:10:11.459: INFO: Got endpoints: latency-svc-nq47m [1.926975112s] Feb 17 11:10:11.658: INFO: Created: latency-svc-9htt2 Feb 17 11:10:11.666: INFO: Got endpoints: latency-svc-9htt2 [2.090228003s] Feb 17 11:10:11.707: INFO: Created: latency-svc-n5rnx Feb 17 11:10:11.725: INFO: Got endpoints: latency-svc-n5rnx [2.027477632s] Feb 17 11:10:11.840: INFO: Created: latency-svc-fhtc6 Feb 17 11:10:11.840: INFO: Got endpoints: latency-svc-fhtc6 [2.09064093s] Feb 17 11:10:11.870: INFO: Created: latency-svc-6dvxc Feb 17 11:10:11.885: INFO: Got endpoints: latency-svc-6dvxc [2.015732758s] Feb 17 11:10:11.991: INFO: Created: latency-svc-27brj Feb 17 11:10:11.999: INFO: Got endpoints: latency-svc-27brj [1.906844627s] Feb 17 11:10:12.055: INFO: Created: latency-svc-mpq4h Feb 17 11:10:12.059: INFO: Got endpoints: latency-svc-mpq4h [1.93233197s] Feb 17 11:10:12.222: INFO: Created: latency-svc-zsbjm Feb 17 11:10:12.245: INFO: Got endpoints: latency-svc-zsbjm [1.761827259s] Feb 17 11:10:12.292: INFO: Created: latency-svc-7thwr Feb 17 11:10:12.705: INFO: Got endpoints: latency-svc-7thwr [2.116509032s] Feb 17 11:10:12.715: INFO: Created: latency-svc-zbr2w Feb 17 11:10:12.729: INFO: Got endpoints: latency-svc-zbr2w [1.909742972s] Feb 17 11:10:12.782: INFO: Created: latency-svc-wbx92 Feb 17 11:10:12.790: INFO: Got endpoints: latency-svc-wbx92 [1.888275776s] Feb 17 11:10:12.918: INFO: Created: latency-svc-mnwqs Feb 17 11:10:12.942: INFO: Got endpoints: latency-svc-mnwqs [1.823189268s] Feb 17 11:10:12.992: INFO: Created: latency-svc-nfgbt Feb 17 11:10:13.168: INFO: Got endpoints: latency-svc-nfgbt [1.915601336s] Feb 17 11:10:13.206: INFO: Created: latency-svc-tmfk5 Feb 17 11:10:13.236: INFO: Got endpoints: latency-svc-tmfk5 [1.959249158s] Feb 17 11:10:13.379: INFO: Created: latency-svc-gzbhg Feb 17 11:10:13.396: INFO: Got endpoints: latency-svc-gzbhg [2.063378501s] Feb 17 11:10:13.477: INFO: Created: latency-svc-x8pk4 Feb 17 11:10:13.637: INFO: Got endpoints: latency-svc-x8pk4 [2.177674787s] Feb 17 11:10:13.672: INFO: Created: latency-svc-jjpvq Feb 17 11:10:13.678: INFO: Got endpoints: latency-svc-jjpvq [2.011634863s] Feb 17 11:10:13.885: INFO: Created: latency-svc-6ppmt Feb 17 11:10:13.985: INFO: Got endpoints: latency-svc-6ppmt [2.260082498s] Feb 17 11:10:14.118: INFO: Created: latency-svc-jkbzb Feb 17 11:10:14.171: INFO: Got endpoints: latency-svc-jkbzb [2.330870918s] Feb 17 11:10:14.284: INFO: Created: latency-svc-67zmp Feb 17 11:10:14.304: INFO: Got endpoints: latency-svc-67zmp [2.419290776s] Feb 17 11:10:14.449: INFO: Created: latency-svc-vpxgt Feb 17 11:10:14.462: INFO: Got endpoints: latency-svc-vpxgt [2.46256693s] Feb 17 11:10:14.682: INFO: Created: latency-svc-6vmd8 Feb 17 11:10:14.699: INFO: Got endpoints: latency-svc-6vmd8 [2.639806197s] Feb 17 11:10:14.754: INFO: Created: latency-svc-s2qcj Feb 17 11:10:14.758: INFO: Got endpoints: latency-svc-s2qcj [2.512584721s] Feb 17 11:10:14.931: INFO: Created: latency-svc-5ddj8 Feb 17 11:10:14.945: INFO: Got endpoints: latency-svc-5ddj8 [2.239510219s] Feb 17 11:10:14.996: INFO: Created: latency-svc-2qnqj Feb 17 11:10:15.195: INFO: Got endpoints: latency-svc-2qnqj [2.465973932s] Feb 17 11:10:15.261: INFO: Created: latency-svc-xmfhk Feb 17 11:10:15.440: INFO: Got endpoints: latency-svc-xmfhk [2.650049826s] Feb 17 11:10:15.474: INFO: Created: 
latency-svc-plk64 Feb 17 11:10:15.519: INFO: Got endpoints: latency-svc-plk64 [2.576539908s] Feb 17 11:10:15.616: INFO: Created: latency-svc-wlxfh Feb 17 11:10:15.631: INFO: Got endpoints: latency-svc-wlxfh [2.462234474s] Feb 17 11:10:15.759: INFO: Created: latency-svc-pmx2c Feb 17 11:10:15.809: INFO: Got endpoints: latency-svc-pmx2c [2.572039175s] Feb 17 11:10:15.845: INFO: Created: latency-svc-mpx4j Feb 17 11:10:15.931: INFO: Got endpoints: latency-svc-mpx4j [2.533963902s] Feb 17 11:10:15.972: INFO: Created: latency-svc-lbq2f Feb 17 11:10:15.996: INFO: Got endpoints: latency-svc-lbq2f [2.358937425s] Feb 17 11:10:16.099: INFO: Created: latency-svc-q2fwp Feb 17 11:10:16.117: INFO: Got endpoints: latency-svc-q2fwp [2.43909577s] Feb 17 11:10:16.166: INFO: Created: latency-svc-wxlml Feb 17 11:10:16.317: INFO: Got endpoints: latency-svc-wxlml [2.331547027s] Feb 17 11:10:16.349: INFO: Created: latency-svc-pbr5g Feb 17 11:10:16.364: INFO: Got endpoints: latency-svc-pbr5g [2.192670996s] Feb 17 11:10:16.391: INFO: Created: latency-svc-zsfn4 Feb 17 11:10:16.493: INFO: Got endpoints: latency-svc-zsfn4 [2.188349006s] Feb 17 11:10:16.524: INFO: Created: latency-svc-gzc8v Feb 17 11:10:16.543: INFO: Got endpoints: latency-svc-gzc8v [2.081043675s] Feb 17 11:10:16.543: INFO: Latencies: [132.669464ms 207.837548ms 314.100555ms 514.213638ms 755.764058ms 925.449662ms 996.268431ms 1.110398769s 1.167507721s 1.379262169s 1.683708359s 1.761827259s 1.823189268s 1.878258607s 1.888275776s 1.906022106s 1.906844627s 1.909742972s 1.912268688s 1.915601336s 1.918538413s 1.926975112s 1.927304545s 1.928460099s 1.929265573s 1.93233197s 1.936286968s 1.959249158s 2.011634863s 2.011844419s 2.014401016s 2.015618815s 2.015732758s 2.027477632s 2.027829252s 2.034186425s 2.038850882s 2.042470427s 2.043189152s 2.055279863s 2.06326161s 2.063378501s 2.071234625s 2.073112389s 2.075138439s 2.075761416s 2.081043675s 2.086990657s 2.090228003s 2.09064093s 2.094918765s 2.095185513s 2.101760057s 2.108718453s 2.108740668s 2.111353808s 2.112116329s 2.116509032s 2.120627416s 2.125411337s 2.12678393s 2.13007194s 2.15034987s 2.154124647s 2.164278948s 2.168734606s 2.171189157s 2.177674787s 2.178985374s 2.181479221s 2.182854207s 2.188349006s 2.19115073s 2.192670996s 2.197798772s 2.199063798s 2.199468079s 2.202788032s 2.229735407s 2.231290435s 2.238789997s 2.239166484s 2.239510219s 2.247368663s 2.24856383s 2.256547966s 2.258719535s 2.260082498s 2.262035858s 2.263143978s 2.263640412s 2.263803723s 2.27425401s 2.277692799s 2.278463988s 2.282706533s 2.289692297s 2.293039687s 2.293683738s 2.317241019s 2.319588946s 2.323412848s 2.325811558s 2.330870918s 2.331547027s 2.342281809s 2.350650228s 2.358937425s 2.367650053s 2.3705336s 2.373794022s 2.384104287s 2.385323872s 2.386987708s 2.391263019s 2.396268219s 2.398155364s 2.402047638s 2.415877695s 2.419290776s 2.430072892s 2.433105825s 2.43909577s 2.44457767s 2.445776292s 2.449115888s 2.452992851s 2.457437659s 2.457862422s 2.462020563s 2.462234474s 2.46256693s 2.464697534s 2.465973932s 2.468028738s 2.474703478s 2.495658435s 2.500634427s 2.512584721s 2.520050405s 2.524699541s 2.533963902s 2.541177727s 2.541832486s 2.546888743s 2.547604082s 2.560247385s 2.5614297s 2.562483209s 2.566956483s 2.567294421s 2.567405242s 2.572039175s 2.574040298s 2.576539908s 2.603272065s 2.612354284s 2.618587374s 2.639806197s 2.650049826s 2.653155384s 2.667376389s 2.667822931s 2.67436371s 2.674969011s 2.680518285s 2.691035185s 2.692376989s 2.695114328s 2.696414194s 2.698675216s 2.698783926s 2.706252468s 2.707918436s 2.73344191s 
2.738791317s 2.740478292s 2.745014344s 2.755572205s 2.773660575s 2.77544517s 2.847571727s 2.851116802s 2.852766203s 2.857076437s 2.866077941s 2.911707587s 2.969551586s 3.029909607s 3.110595658s 3.121439665s 3.176689093s 3.203564053s 3.235049359s 3.259828635s 3.282759085s 3.302474354s 3.303258039s 3.350498464s 3.353104161s] Feb 17 11:10:16.543: INFO: 50 %ile: 2.319588946s Feb 17 11:10:16.543: INFO: 90 %ile: 2.77544517s Feb 17 11:10:16.543: INFO: 99 %ile: 3.350498464s Feb 17 11:10:16.543: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:10:16.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-d25k6" for this suite. Feb 17 11:11:10.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:11:10.817: INFO: namespace: e2e-tests-svc-latency-d25k6, resource: bindings, ignored listing per whitelist Feb 17 11:11:10.861: INFO: namespace e2e-tests-svc-latency-d25k6 deletion completed in 54.298426875s • [SLOW TEST:96.525 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:11:10.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Feb 17 11:11:11.637: INFO: created pod pod-service-account-defaultsa Feb 17 11:11:11.637: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 17 11:11:11.665: INFO: created pod pod-service-account-mountsa Feb 17 11:11:11.665: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 17 11:11:11.697: INFO: created pod pod-service-account-nomountsa Feb 17 11:11:11.697: INFO: pod pod-service-account-nomountsa service account token volume mount: false Feb 17 11:11:11.818: INFO: created pod pod-service-account-defaultsa-mountspec Feb 17 11:11:11.818: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 17 11:11:11.873: INFO: created pod pod-service-account-mountsa-mountspec Feb 17 11:11:11.874: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 17 11:11:11.895: INFO: created pod pod-service-account-nomountsa-mountspec Feb 17 11:11:11.895: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 17 11:11:11.987: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 17 11:11:11.987: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: 
false Feb 17 11:11:12.044: INFO: created pod pod-service-account-mountsa-nomountspec Feb 17 11:11:12.045: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 17 11:11:13.653: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 17 11:11:13.654: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:11:13.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-cf4vc" for this suite. Feb 17 11:11:42.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:11:42.954: INFO: namespace: e2e-tests-svcaccounts-cf4vc, resource: bindings, ignored listing per whitelist Feb 17 11:11:43.035: INFO: namespace e2e-tests-svcaccounts-cf4vc deletion completed in 27.553741813s • [SLOW TEST:32.174 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:11:43.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 17 11:11:55.949: INFO: Successfully updated pod "labelsupdate474371ea-5176-11ea-a180-0242ac110008" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:11:58.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7rx5x" for this suite. 
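For reference, the ServiceAccounts test above ("should allow opting out of API token automount") reports "service account token volume mount: true/false" for every combination of ServiceAccount-level and pod-level settings. The behaviour it exercises is driven by the automountServiceAccountToken field, which exists on both objects; the sketch below is a minimal illustration with placeholder names and an assumed busybox image, not the generated objects from this run.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                      # placeholder name
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: nomount-demo                    # placeholder name
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false   # pod-level field; takes precedence over the ServiceAccount setting
  containers:
  - name: main
    image: busybox                      # assumed image
    command: ["sleep", "3600"]

The pod-level field overrides the ServiceAccount's, which is why the test crosses the defaultsa/mountsa/nomountsa accounts with the unset/mountspec/nomountspec pod variants seen in the log.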
Feb 17 11:12:22.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:12:22.321: INFO: namespace: e2e-tests-projected-7rx5x, resource: bindings, ignored listing per whitelist Feb 17 11:12:22.325: INFO: namespace e2e-tests-projected-7rx5x deletion completed in 24.250024332s • [SLOW TEST:39.289 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:12:22.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-5ed269f1-5176-11ea-a180-0242ac110008 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-5ed269f1-5176-11ea-a180-0242ac110008 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:12:33.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-gfg2w" for this suite. 
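The ConfigMap test above ("updates should be reflected in volume") creates a ConfigMap, mounts it as a volume, updates the ConfigMap, and then waits to observe the change inside the running container. A minimal sketch of the objects involved, with placeholder names and data (the run used a generated name of the form configmap-test-upd-<uid>):

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                     # placeholder name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-config-reader              # placeholder name
spec:
  containers:
  - name: reader
    image: busybox                      # assumed image
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config

Because the kubelet refreshes ConfigMap volumes on its sync interval rather than instantly, the test "waits to observe update in volume" instead of asserting the new value immediately.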
Feb 17 11:12:57.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:12:57.204: INFO: namespace: e2e-tests-configmap-gfg2w, resource: bindings, ignored listing per whitelist Feb 17 11:12:57.253: INFO: namespace e2e-tests-configmap-gfg2w deletion completed in 24.218889122s • [SLOW TEST:34.928 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:12:57.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-vt58 STEP: Creating a pod to test atomic-volume-subpath Feb 17 11:12:57.478: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-vt58" in namespace "e2e-tests-subpath-x6zzt" to be "success or failure" Feb 17 11:12:57.490: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Pending", Reason="", readiness=false. Elapsed: 11.387131ms Feb 17 11:12:59.506: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027313023s Feb 17 11:13:01.555: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076395689s Feb 17 11:13:03.576: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097825818s Feb 17 11:13:05.599: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Pending", Reason="", readiness=false. Elapsed: 8.120635409s Feb 17 11:13:07.610: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Pending", Reason="", readiness=false. Elapsed: 10.131823771s Feb 17 11:13:11.376: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Pending", Reason="", readiness=false. Elapsed: 13.897523411s Feb 17 11:13:13.423: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Pending", Reason="", readiness=false. Elapsed: 15.944473946s Feb 17 11:13:15.440: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Running", Reason="", readiness=false. Elapsed: 17.961271447s Feb 17 11:13:17.476: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Running", Reason="", readiness=false. Elapsed: 19.996983593s Feb 17 11:13:19.500: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Running", Reason="", readiness=false. Elapsed: 22.021246426s Feb 17 11:13:21.521: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.04225861s Feb 17 11:13:23.539: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Running", Reason="", readiness=false. Elapsed: 26.060862463s Feb 17 11:13:25.569: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Running", Reason="", readiness=false. Elapsed: 28.090049352s Feb 17 11:13:27.642: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Running", Reason="", readiness=false. Elapsed: 30.163764349s Feb 17 11:13:29.669: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Running", Reason="", readiness=false. Elapsed: 32.190855945s Feb 17 11:13:31.686: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Running", Reason="", readiness=false. Elapsed: 34.206995907s Feb 17 11:13:33.695: INFO: Pod "pod-subpath-test-downwardapi-vt58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.216609413s STEP: Saw pod success Feb 17 11:13:33.695: INFO: Pod "pod-subpath-test-downwardapi-vt58" satisfied condition "success or failure" Feb 17 11:13:33.699: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-vt58 container test-container-subpath-downwardapi-vt58: STEP: delete the pod Feb 17 11:13:34.661: INFO: Waiting for pod pod-subpath-test-downwardapi-vt58 to disappear Feb 17 11:13:34.749: INFO: Pod pod-subpath-test-downwardapi-vt58 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-vt58 Feb 17 11:13:34.749: INFO: Deleting pod "pod-subpath-test-downwardapi-vt58" in namespace "e2e-tests-subpath-x6zzt" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:13:34.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-x6zzt" for this suite. 
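The Subpath test above ("should support subpaths with downward pod") mounts a single file out of a downward API volume via subPath. The exact spec of pod-subpath-test-downwardapi-vt58 is not printed in the log; the manifest below is a hedged sketch with illustrative names, paths, and an assumed busybox image.

apiVersion: v1
kind: Pod
metadata:
  name: subpath-downward-demo           # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                      # assumed image
    command: ["cat", "/mnt/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /mnt/podname
      subPath: podname                  # mounts only the 'podname' file from the volume
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name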
Feb 17 11:13:40.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:13:40.988: INFO: namespace: e2e-tests-subpath-x6zzt, resource: bindings, ignored listing per whitelist Feb 17 11:13:41.085: INFO: namespace e2e-tests-subpath-x6zzt deletion completed in 6.291799874s • [SLOW TEST:43.832 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:13:41.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod Feb 17 11:13:41.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mgz5g' Feb 17 11:13:44.436: INFO: stderr: "" Feb 17 11:13:44.436: INFO: stdout: "pod/pause created\n" Feb 17 11:13:44.436: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 17 11:13:44.436: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-mgz5g" to be "running and ready" Feb 17 11:13:44.452: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.880054ms Feb 17 11:13:46.491: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054593674s Feb 17 11:13:48.537: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100934339s Feb 17 11:13:50.566: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.129974457s Feb 17 11:13:52.612: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.175371692s Feb 17 11:13:54.638: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.20187705s Feb 17 11:13:54.639: INFO: Pod "pause" satisfied condition "running and ready" Feb 17 11:13:54.639: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod Feb 17 11:13:54.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-mgz5g' Feb 17 11:13:54.769: INFO: stderr: "" Feb 17 11:13:54.769: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 17 11:13:54.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-mgz5g' Feb 17 11:13:54.878: INFO: stderr: "" Feb 17 11:13:54.878: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 10s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 17 11:13:54.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-mgz5g' Feb 17 11:13:55.033: INFO: stderr: "" Feb 17 11:13:55.033: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 17 11:13:55.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-mgz5g' Feb 17 11:13:55.116: INFO: stderr: "" Feb 17 11:13:55.116: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 11s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources Feb 17 11:13:55.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mgz5g' Feb 17 11:13:55.229: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 17 11:13:55.229: INFO: stdout: "pod \"pause\" force deleted\n" Feb 17 11:13:55.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-mgz5g' Feb 17 11:13:55.395: INFO: stderr: "No resources found.\n" Feb 17 11:13:55.395: INFO: stdout: "" Feb 17 11:13:55.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-mgz5g -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 17 11:13:55.482: INFO: stderr: "" Feb 17 11:13:55.483: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:13:55.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mgz5g" for this suite. 
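The Kubectl label test above drives a plain pause pod with the commands shown verbatim in the log: `kubectl label pods pause testing-label=testing-label-value` to add the label, `kubectl label pods pause testing-label-` to remove it, and `kubectl get pod pause -L testing-label` to print the label as an extra column for verification. The pod manifest it starts from is not shown; a minimal pause pod of that kind, with an assumed image tag, looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: pause
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1         # assumed tag; the exact image is not shown in the log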
Feb 17 11:14:01.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:14:01.683: INFO: namespace: e2e-tests-kubectl-mgz5g, resource: bindings, ignored listing per whitelist Feb 17 11:14:01.699: INFO: namespace e2e-tests-kubectl-mgz5g deletion completed in 6.206949869s • [SLOW TEST:20.613 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:14:01.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 17 11:14:01.912: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dfsgg,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfsgg/configmaps/e2e-watch-test-configmap-a,UID:99f3e52d-5176-11ea-a994-fa163e34d433,ResourceVersion:21968603,Generation:0,CreationTimestamp:2020-02-17 11:14:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 17 11:14:01.913: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dfsgg,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfsgg/configmaps/e2e-watch-test-configmap-a,UID:99f3e52d-5176-11ea-a994-fa163e34d433,ResourceVersion:21968603,Generation:0,CreationTimestamp:2020-02-17 11:14:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Feb 17 11:14:11.943: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dfsgg,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfsgg/configmaps/e2e-watch-test-configmap-a,UID:99f3e52d-5176-11ea-a994-fa163e34d433,ResourceVersion:21968616,Generation:0,CreationTimestamp:2020-02-17 11:14:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 17 11:14:11.944: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dfsgg,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfsgg/configmaps/e2e-watch-test-configmap-a,UID:99f3e52d-5176-11ea-a994-fa163e34d433,ResourceVersion:21968616,Generation:0,CreationTimestamp:2020-02-17 11:14:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Feb 17 11:14:21.972: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dfsgg,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfsgg/configmaps/e2e-watch-test-configmap-a,UID:99f3e52d-5176-11ea-a994-fa163e34d433,ResourceVersion:21968629,Generation:0,CreationTimestamp:2020-02-17 11:14:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 17 11:14:21.973: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dfsgg,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfsgg/configmaps/e2e-watch-test-configmap-a,UID:99f3e52d-5176-11ea-a994-fa163e34d433,ResourceVersion:21968629,Generation:0,CreationTimestamp:2020-02-17 11:14:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Feb 17 11:14:31.998: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dfsgg,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfsgg/configmaps/e2e-watch-test-configmap-a,UID:99f3e52d-5176-11ea-a994-fa163e34d433,ResourceVersion:21968642,Generation:0,CreationTimestamp:2020-02-17 11:14:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} Feb 17 11:14:31.998: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-dfsgg,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfsgg/configmaps/e2e-watch-test-configmap-a,UID:99f3e52d-5176-11ea-a994-fa163e34d433,ResourceVersion:21968642,Generation:0,CreationTimestamp:2020-02-17 11:14:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Feb 17 11:14:42.044: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-dfsgg,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfsgg/configmaps/e2e-watch-test-configmap-b,UID:b1dac2a7-5176-11ea-a994-fa163e34d433,ResourceVersion:21968655,Generation:0,CreationTimestamp:2020-02-17 11:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 17 11:14:42.044: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-dfsgg,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfsgg/configmaps/e2e-watch-test-configmap-b,UID:b1dac2a7-5176-11ea-a994-fa163e34d433,ResourceVersion:21968655,Generation:0,CreationTimestamp:2020-02-17 11:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Feb 17 11:14:52.068: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-dfsgg,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfsgg/configmaps/e2e-watch-test-configmap-b,UID:b1dac2a7-5176-11ea-a994-fa163e34d433,ResourceVersion:21968668,Generation:0,CreationTimestamp:2020-02-17 11:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 17 11:14:52.068: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-dfsgg,SelfLink:/api/v1/namespaces/e2e-tests-watch-dfsgg/configmaps/e2e-watch-test-configmap-b,UID:b1dac2a7-5176-11ea-a994-fa163e34d433,ResourceVersion:21968668,Generation:0,CreationTimestamp:2020-02-17 11:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:15:02.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-dfsgg" for this suite. Feb 17 11:15:08.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:15:08.347: INFO: namespace: e2e-tests-watch-dfsgg, resource: bindings, ignored listing per whitelist Feb 17 11:15:08.407: INFO: namespace e2e-tests-watch-dfsgg deletion completed in 6.294592143s • [SLOW TEST:66.709 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:15:08.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:16:09.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-cj7wd" for this suite. 
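The Container Runtime test above ("should run with the expected status") starts containers that exit and checks the reported RestartCount, Phase, Ready condition, and State. The pod specs behind terminate-cmd-rpa/rpof/rpn are not printed in the log; the sketch below only illustrates the general shape, using an assumed busybox image and an arbitrary exit code.

apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo                  # placeholder; the test names follow terminate-cmd-*
spec:
  restartPolicy: OnFailure              # the test also covers Always and Never
  containers:
  - name: terminate-demo
    image: busybox                      # assumed image
    command: ["sh", "-c", "exit 1"]

With OnFailure the kubelet restarts the container and RestartCount climbs; with Never a non-zero exit leaves the pod Failed (exit 0 leaves it Succeeded), which is the status matrix the test walks through.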
Feb 17 11:16:17.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:16:17.146: INFO: namespace: e2e-tests-container-runtime-cj7wd, resource: bindings, ignored listing per whitelist Feb 17 11:16:17.202: INFO: namespace e2e-tests-container-runtime-cj7wd deletion completed in 8.167532633s • [SLOW TEST:68.794 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:16:17.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-eab777ff-5176-11ea-a180-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 17 11:16:17.429: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eab8bff4-5176-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-vzxvb" to be "success or failure" Feb 17 11:16:17.448: INFO: Pod "pod-projected-configmaps-eab8bff4-5176-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.990901ms Feb 17 11:16:19.462: INFO: Pod "pod-projected-configmaps-eab8bff4-5176-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032697462s Feb 17 11:16:21.476: INFO: Pod "pod-projected-configmaps-eab8bff4-5176-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046599848s Feb 17 11:16:24.397: INFO: Pod "pod-projected-configmaps-eab8bff4-5176-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.966900194s Feb 17 11:16:26.412: INFO: Pod "pod-projected-configmaps-eab8bff4-5176-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.98278134s Feb 17 11:16:28.441: INFO: Pod "pod-projected-configmaps-eab8bff4-5176-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.011823591s STEP: Saw pod success Feb 17 11:16:28.442: INFO: Pod "pod-projected-configmaps-eab8bff4-5176-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:16:28.456: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-eab8bff4-5176-11ea-a180-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 17 11:16:28.661: INFO: Waiting for pod pod-projected-configmaps-eab8bff4-5176-11ea-a180-0242ac110008 to disappear Feb 17 11:16:29.129: INFO: Pod pod-projected-configmaps-eab8bff4-5176-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:16:29.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vzxvb" for this suite. Feb 17 11:16:35.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:16:36.031: INFO: namespace: e2e-tests-projected-vzxvb, resource: bindings, ignored listing per whitelist Feb 17 11:16:36.137: INFO: namespace e2e-tests-projected-vzxvb deletion completed in 6.983376105s • [SLOW TEST:18.935 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:16:36.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Feb 17 11:16:36.394: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Feb 17 11:16:36.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bzwnt' Feb 17 11:16:36.912: INFO: stderr: "" Feb 17 11:16:36.913: INFO: stdout: "service/redis-slave created\n" Feb 17 11:16:36.913: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Feb 17 11:16:36.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bzwnt' Feb 17 11:16:37.228: INFO: stderr: "" Feb 17 11:16:37.228: INFO: stdout: "service/redis-master created\n" Feb 17 11:16:37.229: INFO: apiVersion: v1 kind: Service 
metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Feb 17 11:16:37.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bzwnt' Feb 17 11:16:37.845: INFO: stderr: "" Feb 17 11:16:37.845: INFO: stdout: "service/frontend created\n" Feb 17 11:16:37.848: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Feb 17 11:16:37.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bzwnt' Feb 17 11:16:38.227: INFO: stderr: "" Feb 17 11:16:38.228: INFO: stdout: "deployment.extensions/frontend created\n" Feb 17 11:16:38.229: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 17 11:16:38.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bzwnt' Feb 17 11:16:38.651: INFO: stderr: "" Feb 17 11:16:38.651: INFO: stdout: "deployment.extensions/redis-master created\n" Feb 17 11:16:38.652: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Feb 17 11:16:38.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bzwnt' Feb 17 11:16:39.087: INFO: stderr: "" Feb 17 11:16:39.088: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Feb 17 11:16:39.088: INFO: Waiting for all frontend pods to be Running. Feb 17 11:17:04.140: INFO: Waiting for frontend to serve content. Feb 17 11:17:05.958: INFO: Trying to add a new entry to the guestbook. Feb 17 11:17:06.078: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Feb 17 11:17:06.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bzwnt' Feb 17 11:17:06.660: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 17 11:17:06.660: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Feb 17 11:17:06.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bzwnt' Feb 17 11:17:07.000: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 17 11:17:07.000: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 17 11:17:07.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bzwnt' Feb 17 11:17:07.277: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 17 11:17:07.277: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 17 11:17:07.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bzwnt' Feb 17 11:17:07.445: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 17 11:17:07.445: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 17 11:17:07.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bzwnt' Feb 17 11:17:07.815: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 17 11:17:07.815: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 17 11:17:07.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bzwnt' Feb 17 11:17:07.977: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 17 11:17:07.977: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:17:07.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bzwnt" for this suite. 
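The guestbook manifests piped to `kubectl create -f -` above are logged with their original line breaks collapsed. For readability, here is the frontend Service from that output reflowed into standard YAML layout; the content is unchanged from the log.

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend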
Feb 17 11:17:54.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:17:54.286: INFO: namespace: e2e-tests-kubectl-bzwnt, resource: bindings, ignored listing per whitelist Feb 17 11:17:54.492: INFO: namespace e2e-tests-kubectl-bzwnt deletion completed in 46.476379212s • [SLOW TEST:78.354 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:17:54.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 11:17:54.755: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24ba86b4-5177-11ea-a180-0242ac110008" in namespace "e2e-tests-downward-api-pkrwg" to be "success or failure" Feb 17 11:17:54.776: INFO: Pod "downwardapi-volume-24ba86b4-5177-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.377548ms Feb 17 11:17:57.872: INFO: Pod "downwardapi-volume-24ba86b4-5177-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.116987254s Feb 17 11:17:59.891: INFO: Pod "downwardapi-volume-24ba86b4-5177-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.135874669s Feb 17 11:18:01.928: INFO: Pod "downwardapi-volume-24ba86b4-5177-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.173534367s Feb 17 11:18:03.940: INFO: Pod "downwardapi-volume-24ba86b4-5177-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.185401883s Feb 17 11:18:05.970: INFO: Pod "downwardapi-volume-24ba86b4-5177-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.215294208s Feb 17 11:18:07.989: INFO: Pod "downwardapi-volume-24ba86b4-5177-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.234416205s STEP: Saw pod success Feb 17 11:18:07.989: INFO: Pod "downwardapi-volume-24ba86b4-5177-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:18:07.995: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-24ba86b4-5177-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 11:18:08.929: INFO: Waiting for pod downwardapi-volume-24ba86b4-5177-11ea-a180-0242ac110008 to disappear Feb 17 11:18:08.957: INFO: Pod downwardapi-volume-24ba86b4-5177-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:18:08.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-pkrwg" for this suite. Feb 17 11:18:15.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:18:15.271: INFO: namespace: e2e-tests-downward-api-pkrwg, resource: bindings, ignored listing per whitelist Feb 17 11:18:15.338: INFO: namespace e2e-tests-downward-api-pkrwg deletion completed in 6.364570539s • [SLOW TEST:20.846 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:18:15.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 17 11:18:15.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-t4758' Feb 17 11:18:15.759: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 17 11:18:15.760: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Feb 17 11:18:15.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-t4758' Feb 17 11:18:16.320: INFO: stderr: "" Feb 17 11:18:16.320: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:18:16.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-t4758" for this suite. Feb 17 11:18:40.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:18:40.797: INFO: namespace: e2e-tests-kubectl-t4758, resource: bindings, ignored listing per whitelist Feb 17 11:18:40.815: INFO: namespace e2e-tests-kubectl-t4758 deletion completed in 24.475900285s • [SLOW TEST:25.476 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:18:40.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:18:49.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-stfml" for this suite. 
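For reference, a minimal sketch of the kind of pod the hostAliases test above creates, written against the corev1 types from k8s.io/api. The image, IP, and hostnames here are illustrative placeholders, not values taken from this run; the kubelet merges spec.hostAliases into the container's /etc/hosts, which is what the test asserts.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod with spec.hostAliases; the kubelet appends these entries to the
	// container's /etc/hosts before starting the container.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			HostAliases: []corev1.HostAlias{
				{IP: "123.45.67.89", Hostnames: []string{"foo.local", "bar.local"}},
			},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"cat", "/etc/hosts"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // printing the manifest stands in for creating it with a client
}
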
Feb 17 11:19:35.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:19:35.177: INFO: namespace: e2e-tests-kubelet-test-stfml, resource: bindings, ignored listing per whitelist Feb 17 11:19:35.255: INFO: namespace e2e-tests-kubelet-test-stfml deletion completed in 46.202009225s • [SLOW TEST:54.440 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:19:35.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 17 11:22:37.830: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 17 11:22:37.921: INFO: Pod pod-with-poststart-exec-hook still exists Feb 17 11:22:39.921: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 17 11:22:39.938: INFO: Pod pod-with-poststart-exec-hook still exists Feb 17 11:22:41.922: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 17 11:22:41.943: INFO: Pod pod-with-poststart-exec-hook still exists Feb 17 11:22:43.922: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 17 11:22:43.949: INFO: Pod pod-with-poststart-exec-hook still exists Feb 17 11:22:45.922: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 17 11:22:45.943: INFO: Pod pod-with-poststart-exec-hook still exists Feb 17 11:22:47.922: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 17 11:22:47.938: INFO: Pod pod-with-poststart-exec-hook still exists Feb 17 11:22:49.922: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 17 11:22:49.937: INFO: Pod pod-with-poststart-exec-hook still exists Feb 17 11:22:51.921: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 17 11:22:51.940: INFO: Pod pod-with-poststart-exec-hook still exists Feb 17 11:22:53.922: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 17 11:22:53.961: INFO: Pod pod-with-poststart-exec-hook still exists Feb 17 11:22:55.921: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 17 11:22:55.955: INFO: Pod 
pod-with-poststart-exec-hook still exists Feb 17 11:22:57.922: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 17 11:22:57.997: INFO: Pod pod-with-poststart-exec-hook still exists Feb 17 11:22:59.922: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Feb 17 11:22:59.977: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:22:59.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-h6b9c" for this suite. Feb 17 11:23:24.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:23:24.207: INFO: namespace: e2e-tests-container-lifecycle-hook-h6b9c, resource: bindings, ignored listing per whitelist Feb 17 11:23:24.291: INFO: namespace e2e-tests-container-lifecycle-hook-h6b9c deletion completed in 24.305980163s • [SLOW TEST:229.036 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:23:24.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-gg2z STEP: Creating a pod to test atomic-volume-subpath Feb 17 11:23:24.548: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-gg2z" in namespace "e2e-tests-subpath-f2bnx" to be "success or failure" Feb 17 11:23:24.559: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Pending", Reason="", readiness=false. Elapsed: 11.064535ms Feb 17 11:23:26.590: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041516069s Feb 17 11:23:28.746: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197699707s Feb 17 11:23:30.777: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228894026s Feb 17 11:23:32.814: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26585744s Feb 17 11:23:35.149: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.601318192s Feb 17 11:23:37.346: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.797620434s Feb 17 11:23:39.363: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Pending", Reason="", readiness=false. Elapsed: 14.814796045s Feb 17 11:23:41.523: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Running", Reason="", readiness=true. Elapsed: 16.975446771s Feb 17 11:23:43.541: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Running", Reason="", readiness=false. Elapsed: 18.992753975s Feb 17 11:23:45.560: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Running", Reason="", readiness=false. Elapsed: 21.012357149s Feb 17 11:23:47.582: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Running", Reason="", readiness=false. Elapsed: 23.034457779s Feb 17 11:23:49.601: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Running", Reason="", readiness=false. Elapsed: 25.052520853s Feb 17 11:23:51.614: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Running", Reason="", readiness=false. Elapsed: 27.065988001s Feb 17 11:23:53.634: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Running", Reason="", readiness=false. Elapsed: 29.085494263s Feb 17 11:23:55.656: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Running", Reason="", readiness=false. Elapsed: 31.108448143s Feb 17 11:23:57.672: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Running", Reason="", readiness=false. Elapsed: 33.124261168s Feb 17 11:23:59.763: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Running", Reason="", readiness=false. Elapsed: 35.215220529s Feb 17 11:24:01.780: INFO: Pod "pod-subpath-test-secret-gg2z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.232401426s STEP: Saw pod success Feb 17 11:24:01.781: INFO: Pod "pod-subpath-test-secret-gg2z" satisfied condition "success or failure" Feb 17 11:24:01.792: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-gg2z container test-container-subpath-secret-gg2z: STEP: delete the pod Feb 17 11:24:02.528: INFO: Waiting for pod pod-subpath-test-secret-gg2z to disappear Feb 17 11:24:02.717: INFO: Pod pod-subpath-test-secret-gg2z no longer exists STEP: Deleting pod pod-subpath-test-secret-gg2z Feb 17 11:24:02.718: INFO: Deleting pod "pod-subpath-test-secret-gg2z" in namespace "e2e-tests-subpath-f2bnx" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:24:02.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-f2bnx" for this suite. 
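For reference, a minimal sketch of the subPath pattern the atomic-writer test above exercises: a secret volume whose single key is mounted at a file path through volumeMounts[].subPath. The secret name, key, image, and paths below are illustrative placeholders, assuming the corev1 types from k8s.io/api.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Container mounting a single key of a Secret via subPath, so the key
	// appears as one file rather than a whole directory of keys.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/secret-file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-vol",
					MountPath: "/etc/secret-file",
					SubPath:   "password", // mount only this key from the secret volume
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
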
Feb 17 11:24:08.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:24:08.842: INFO: namespace: e2e-tests-subpath-f2bnx, resource: bindings, ignored listing per whitelist Feb 17 11:24:09.045: INFO: namespace e2e-tests-subpath-f2bnx deletion completed in 6.2873807s • [SLOW TEST:44.754 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:24:09.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 11:24:09.217: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03edf3a9-5178-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-mq9gn" to be "success or failure" Feb 17 11:24:09.246: INFO: Pod "downwardapi-volume-03edf3a9-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 29.194634ms Feb 17 11:24:11.830: INFO: Pod "downwardapi-volume-03edf3a9-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.613166172s Feb 17 11:24:13.864: INFO: Pod "downwardapi-volume-03edf3a9-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.646561784s Feb 17 11:24:15.888: INFO: Pod "downwardapi-volume-03edf3a9-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.670982636s Feb 17 11:24:17.902: INFO: Pod "downwardapi-volume-03edf3a9-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.684512964s Feb 17 11:24:19.913: INFO: Pod "downwardapi-volume-03edf3a9-5178-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.696251977s STEP: Saw pod success Feb 17 11:24:19.914: INFO: Pod "downwardapi-volume-03edf3a9-5178-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:24:19.920: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-03edf3a9-5178-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 11:24:20.220: INFO: Waiting for pod downwardapi-volume-03edf3a9-5178-11ea-a180-0242ac110008 to disappear Feb 17 11:24:20.242: INFO: Pod downwardapi-volume-03edf3a9-5178-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:24:20.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mq9gn" for this suite. Feb 17 11:24:26.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:24:26.715: INFO: namespace: e2e-tests-projected-mq9gn, resource: bindings, ignored listing per whitelist Feb 17 11:24:26.726: INFO: namespace e2e-tests-projected-mq9gn deletion completed in 6.470447954s • [SLOW TEST:17.680 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:24:26.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 17 11:24:27.019: INFO: Waiting up to 5m0s for pod "pod-0e89d7e4-5178-11ea-a180-0242ac110008" in namespace "e2e-tests-emptydir-zssrb" to be "success or failure" Feb 17 11:24:27.040: INFO: Pod "pod-0e89d7e4-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 21.136966ms Feb 17 11:24:29.053: INFO: Pod "pod-0e89d7e4-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034024964s Feb 17 11:24:31.066: INFO: Pod "pod-0e89d7e4-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047266423s Feb 17 11:24:33.080: INFO: Pod "pod-0e89d7e4-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061206282s Feb 17 11:24:35.097: INFO: Pod "pod-0e89d7e4-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077988112s Feb 17 11:24:37.109: INFO: Pod "pod-0e89d7e4-5178-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.090079941s STEP: Saw pod success Feb 17 11:24:37.109: INFO: Pod "pod-0e89d7e4-5178-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:24:37.113: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0e89d7e4-5178-11ea-a180-0242ac110008 container test-container: STEP: delete the pod Feb 17 11:24:37.375: INFO: Waiting for pod pod-0e89d7e4-5178-11ea-a180-0242ac110008 to disappear Feb 17 11:24:37.544: INFO: Pod pod-0e89d7e4-5178-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:24:37.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-zssrb" for this suite. Feb 17 11:24:44.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:24:44.703: INFO: namespace: e2e-tests-emptydir-zssrb, resource: bindings, ignored listing per whitelist Feb 17 11:24:44.763: INFO: namespace e2e-tests-emptydir-zssrb deletion completed in 7.199710575s • [SLOW TEST:18.037 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:24:44.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0217 11:25:15.640310 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
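For reference, a rough sketch of the orphaning behaviour the garbage-collector test above checks, using plain client-go rather than the e2e framework. The call signatures follow current client-go (a context-taking Delete); the v1.13-era client used in this run passed a *DeleteOptions without a context. The namespace and deployment name are placeholders.

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Delete a Deployment but leave its ReplicaSets behind by setting
	// deleteOptions.propagationPolicy to Orphan.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	orphan := metav1.DeletePropagationOrphan
	err = clientset.AppsV1().Deployments("default").Delete(
		context.TODO(), "my-deployment",
		metav1.DeleteOptions{PropagationPolicy: &orphan},
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("deployment deleted; its ReplicaSets were orphaned, not garbage collected")
}
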
Feb 17 11:25:15.640: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:25:15.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-djqtz" for this suite. Feb 17 11:25:23.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:25:24.119: INFO: namespace: e2e-tests-gc-djqtz, resource: bindings, ignored listing per whitelist Feb 17 11:25:24.228: INFO: namespace e2e-tests-gc-djqtz deletion completed in 8.576485402s • [SLOW TEST:39.465 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:25:24.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-30ede503-5178-11ea-a180-0242ac110008 STEP: Creating a pod to test consume secrets Feb 17 11:25:24.754: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-30f05c4c-5178-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-k85rq" to be "success or failure" Feb 17 11:25:24.770: INFO: Pod "pod-projected-secrets-30f05c4c-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.79048ms Feb 17 11:25:26.942: INFO: Pod "pod-projected-secrets-30f05c4c-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188032002s Feb 17 11:25:28.968: INFO: Pod "pod-projected-secrets-30f05c4c-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213375696s Feb 17 11:25:31.171: INFO: Pod "pod-projected-secrets-30f05c4c-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417021363s Feb 17 11:25:33.210: INFO: Pod "pod-projected-secrets-30f05c4c-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.455284426s Feb 17 11:25:35.233: INFO: Pod "pod-projected-secrets-30f05c4c-5178-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.479136662s STEP: Saw pod success Feb 17 11:25:35.234: INFO: Pod "pod-projected-secrets-30f05c4c-5178-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:25:35.243: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-30f05c4c-5178-11ea-a180-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 17 11:25:35.384: INFO: Waiting for pod pod-projected-secrets-30f05c4c-5178-11ea-a180-0242ac110008 to disappear Feb 17 11:25:35.402: INFO: Pod pod-projected-secrets-30f05c4c-5178-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:25:35.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k85rq" for this suite. Feb 17 11:25:41.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:25:41.719: INFO: namespace: e2e-tests-projected-k85rq, resource: bindings, ignored listing per whitelist Feb 17 11:25:41.732: INFO: namespace e2e-tests-projected-k85rq deletion completed in 6.299600763s • [SLOW TEST:17.503 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:25:41.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 11:25:42.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b3bbefd-5178-11ea-a180-0242ac110008" in namespace 
"e2e-tests-projected-j7456" to be "success or failure" Feb 17 11:25:42.177: INFO: Pod "downwardapi-volume-3b3bbefd-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 171.008209ms Feb 17 11:25:44.500: INFO: Pod "downwardapi-volume-3b3bbefd-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.494416795s Feb 17 11:25:46.532: INFO: Pod "downwardapi-volume-3b3bbefd-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.526322423s Feb 17 11:25:48.701: INFO: Pod "downwardapi-volume-3b3bbefd-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.694563492s Feb 17 11:25:50.713: INFO: Pod "downwardapi-volume-3b3bbefd-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.706952773s Feb 17 11:25:52.723: INFO: Pod "downwardapi-volume-3b3bbefd-5178-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.717401349s STEP: Saw pod success Feb 17 11:25:52.723: INFO: Pod "downwardapi-volume-3b3bbefd-5178-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:25:52.726: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3b3bbefd-5178-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 11:25:53.440: INFO: Waiting for pod downwardapi-volume-3b3bbefd-5178-11ea-a180-0242ac110008 to disappear Feb 17 11:25:53.455: INFO: Pod downwardapi-volume-3b3bbefd-5178-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:25:53.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j7456" for this suite. 
Feb 17 11:25:59.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:25:59.716: INFO: namespace: e2e-tests-projected-j7456, resource: bindings, ignored listing per whitelist Feb 17 11:25:59.912: INFO: namespace e2e-tests-projected-j7456 deletion completed in 6.450383739s • [SLOW TEST:18.179 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:25:59.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 17 11:26:00.261: INFO: Waiting up to 5m0s for pod "pod-461cac55-5178-11ea-a180-0242ac110008" in namespace "e2e-tests-emptydir-vp56p" to be "success or failure" Feb 17 11:26:00.278: INFO: Pod "pod-461cac55-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.331453ms Feb 17 11:26:02.341: INFO: Pod "pod-461cac55-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079037865s Feb 17 11:26:04.365: INFO: Pod "pod-461cac55-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103260922s Feb 17 11:26:06.378: INFO: Pod "pod-461cac55-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116782539s Feb 17 11:26:08.392: INFO: Pod "pod-461cac55-5178-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130357878s Feb 17 11:26:10.402: INFO: Pod "pod-461cac55-5178-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.14050492s STEP: Saw pod success Feb 17 11:26:10.402: INFO: Pod "pod-461cac55-5178-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:26:10.412: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-461cac55-5178-11ea-a180-0242ac110008 container test-container: STEP: delete the pod Feb 17 11:26:10.588: INFO: Waiting for pod pod-461cac55-5178-11ea-a180-0242ac110008 to disappear Feb 17 11:26:10.605: INFO: Pod pod-461cac55-5178-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:26:10.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vp56p" for this suite. 
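For reference, a minimal sketch of an emptyDir volume backed by tmpfs (medium: Memory), the configuration whose default mount mode the test above verifies from inside the container. Image and command are illustrative placeholders, assuming the corev1 types from k8s.io/api.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// emptyDir on tmpfs: the kubelet mounts a RAM-backed filesystem at the
	// volume's mount path instead of using node-local disk.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cache",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "mount | grep /cache; ls -ld /cache"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
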
Feb 17 11:26:16.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:26:16.925: INFO: namespace: e2e-tests-emptydir-vp56p, resource: bindings, ignored listing per whitelist Feb 17 11:26:17.008: INFO: namespace e2e-tests-emptydir-vp56p deletion completed in 6.392610691s • [SLOW TEST:17.096 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:26:17.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 11:26:17.205: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 17 11:26:22.980: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 17 11:26:27.006: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 17 11:26:27.043: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-4mz2l,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-4mz2l/deployments/test-cleanup-deployment,UID:5611ab2c-5178-11ea-a994-fa163e34d433,ResourceVersion:21970134,Generation:1,CreationTimestamp:2020-02-17 11:26:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Feb 17 11:26:27.048: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:26:27.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-4mz2l" for this suite. Feb 17 11:26:37.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:26:37.167: INFO: namespace: e2e-tests-deployment-4mz2l, resource: bindings, ignored listing per whitelist Feb 17 11:26:37.223: INFO: namespace e2e-tests-deployment-4mz2l deletion completed in 10.163779949s • [SLOW TEST:20.215 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:26:37.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 11:26:37.393: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 17 11:26:37.407: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 17 11:26:43.013: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 17 11:26:47.041: INFO: Creating 
deployment "test-rolling-update-deployment" Feb 17 11:26:47.060: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Feb 17 11:26:47.274: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 17 11:26:49.298: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 17 11:26:49.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 17 11:26:51.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 17 11:26:53.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 17 11:26:55.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717535607, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 17 11:26:57.327: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 17 11:26:57.348: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-hlqlq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hlqlq/deployments/test-rolling-update-deployment,UID:62026efa-5178-11ea-a994-fa163e34d433,ResourceVersion:21970230,Generation:1,CreationTimestamp:2020-02-17 11:26:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-17 11:26:47 +0000 UTC 2020-02-17 11:26:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-17 11:26:56 +0000 UTC 2020-02-17 11:26:47 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 17 11:26:57.354: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-hlqlq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hlqlq/replicasets/test-rolling-update-deployment-75db98fb4c,UID:622ad8fc-5178-11ea-a994-fa163e34d433,ResourceVersion:21970220,Generation:1,CreationTimestamp:2020-02-17 11:26:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 62026efa-5178-11ea-a994-fa163e34d433 0xc00231e667 0xc00231e668}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 17 11:26:57.354: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 17 11:26:57.354: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-hlqlq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hlqlq/replicasets/test-rolling-update-controller,UID:5c41e06c-5178-11ea-a994-fa163e34d433,ResourceVersion:21970229,Generation:2,CreationTimestamp:2020-02-17 11:26:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 62026efa-5178-11ea-a994-fa163e34d433 0xc00231e58f 0xc00231e5a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 17 11:26:57.365: INFO: Pod "test-rolling-update-deployment-75db98fb4c-7trm5" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-7trm5,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-hlqlq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hlqlq/pods/test-rolling-update-deployment-75db98fb4c-7trm5,UID:62321c1a-5178-11ea-a994-fa163e34d433,ResourceVersion:21970219,Generation:0,CreationTimestamp:2020-02-17 11:26:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 622ad8fc-5178-11ea-a994-fa163e34d433 0xc00151df97 0xc00151df98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5ldb7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5ldb7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-5ldb7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001eb2330} {node.kubernetes.io/unreachable Exists NoExecute 0xc001eb2350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:26:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:26:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:26:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:26:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-17 11:26:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-17 11:26:54 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://efc3f141e7037e7b84efbf892b97c7defb2a8153397598479877d5cea74f30ec}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:26:57.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-hlqlq" for this suite. 
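For reference, a condensed sketch of a Deployment shaped like the one dumped above: one replica, a RollingUpdate strategy with 25% maxUnavailable and 25% maxSurge, and revisionHistoryLimit 10, so the adopted old ReplicaSet is kept at zero replicas rather than deleted. Object and label names are simplified placeholders, assuming the appsv1 and corev1 types from k8s.io/api.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	historyLimit := int32(10)
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")

	// Rolling update: the controller scales up a new ReplicaSet and scales
	// the adopted old one down to zero, which is what the test asserts.
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "rolling-update-demo"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector:             &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}
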
Feb 17 11:27:05.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:27:06.049: INFO: namespace: e2e-tests-deployment-hlqlq, resource: bindings, ignored listing per whitelist Feb 17 11:27:06.161: INFO: namespace e2e-tests-deployment-hlqlq deletion completed in 8.789381397s • [SLOW TEST:28.938 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:27:06.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-wnm9q in namespace e2e-tests-proxy-ftj7z I0217 11:27:06.620724 8 runners.go:184] Created replication controller with name: proxy-service-wnm9q, namespace: e2e-tests-proxy-ftj7z, replica count: 1 I0217 11:27:07.671589 8 runners.go:184] proxy-service-wnm9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:27:08.671955 8 runners.go:184] proxy-service-wnm9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:27:09.672327 8 runners.go:184] proxy-service-wnm9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:27:10.673132 8 runners.go:184] proxy-service-wnm9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:27:11.673855 8 runners.go:184] proxy-service-wnm9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:27:12.674378 8 runners.go:184] proxy-service-wnm9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:27:13.674920 8 runners.go:184] proxy-service-wnm9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:27:14.675579 8 runners.go:184] proxy-service-wnm9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:27:15.676253 8 runners.go:184] proxy-service-wnm9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0217 11:27:16.677131 8 runners.go:184] proxy-service-wnm9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0217 11:27:17.678663 8 runners.go:184] 
proxy-service-wnm9q Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 17 11:27:17.724: INFO: setup took 11.304218838s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 17 11:27:17.861: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-ftj7z/pods/http:proxy-service-wnm9q-bjnxm:1080/proxy/: [remainder of the proxy test output missing from the captured log] ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 17 11:27:52.672: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:27:52.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-q8v89" for this suite. Feb 17 11:28:20.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:28:21.145: INFO: namespace: e2e-tests-replicaset-q8v89, resource: bindings, ignored listing per whitelist Feb 17 11:28:21.315: INFO: namespace e2e-tests-replicaset-q8v89 deletion completed in 28.530777631s • [SLOW TEST:42.308 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:28:21.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Feb 17 11:28:21.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nsxd7' Feb 17 11:28:23.930: INFO: stderr: "" Feb 17 11:28:23.930: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start.
Feb 17 11:28:25.306: INFO: Selector matched 1 pods for map[app:redis] Feb 17 11:28:25.306: INFO: Found 0 / 1 Feb 17 11:28:26.050: INFO: Selector matched 1 pods for map[app:redis] Feb 17 11:28:26.051: INFO: Found 0 / 1 Feb 17 11:28:26.950: INFO: Selector matched 1 pods for map[app:redis] Feb 17 11:28:26.950: INFO: Found 0 / 1 Feb 17 11:28:27.977: INFO: Selector matched 1 pods for map[app:redis] Feb 17 11:28:27.977: INFO: Found 0 / 1 Feb 17 11:28:29.190: INFO: Selector matched 1 pods for map[app:redis] Feb 17 11:28:29.190: INFO: Found 0 / 1 Feb 17 11:28:29.949: INFO: Selector matched 1 pods for map[app:redis] Feb 17 11:28:29.949: INFO: Found 0 / 1 Feb 17 11:28:30.944: INFO: Selector matched 1 pods for map[app:redis] Feb 17 11:28:30.945: INFO: Found 0 / 1 Feb 17 11:28:31.950: INFO: Selector matched 1 pods for map[app:redis] Feb 17 11:28:31.950: INFO: Found 0 / 1 Feb 17 11:28:32.945: INFO: Selector matched 1 pods for map[app:redis] Feb 17 11:28:32.945: INFO: Found 1 / 1 Feb 17 11:28:32.946: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 17 11:28:32.951: INFO: Selector matched 1 pods for map[app:redis] Feb 17 11:28:32.951: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Feb 17 11:28:32.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-8j9rh redis-master --namespace=e2e-tests-kubectl-nsxd7' Feb 17 11:28:33.133: INFO: stderr: "" Feb 17 11:28:33.134: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 17 Feb 11:28:31.494 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Feb 11:28:31.494 # Server started, Redis version 3.2.12\n1:M 17 Feb 11:28:31.494 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 17 Feb 11:28:31.494 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Feb 17 11:28:33.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-8j9rh redis-master --namespace=e2e-tests-kubectl-nsxd7 --tail=1' Feb 17 11:28:33.276: INFO: stderr: "" Feb 17 11:28:33.276: INFO: stdout: "1:M 17 Feb 11:28:31.494 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Feb 17 11:28:33.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-8j9rh redis-master --namespace=e2e-tests-kubectl-nsxd7 --limit-bytes=1' Feb 17 11:28:33.390: INFO: stderr: "" Feb 17 11:28:33.390: INFO: stdout: " " STEP: exposing timestamps Feb 17 11:28:33.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-8j9rh redis-master --namespace=e2e-tests-kubectl-nsxd7 --tail=1 --timestamps' Feb 17 11:28:33.568: INFO: stderr: "" Feb 17 11:28:33.569: INFO: stdout: "2020-02-17T11:28:31.49527266Z 1:M 17 Feb 11:28:31.494 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Feb 17 11:28:36.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-8j9rh redis-master --namespace=e2e-tests-kubectl-nsxd7 --since=1s' Feb 17 11:28:36.356: INFO: stderr: "" Feb 17 11:28:36.356: INFO: stdout: "" Feb 17 11:28:36.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-8j9rh redis-master --namespace=e2e-tests-kubectl-nsxd7 --since=24h' Feb 17 11:28:36.524: INFO: stderr: "" Feb 17 11:28:36.524: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 17 Feb 11:28:31.494 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Feb 11:28:31.494 # Server started, Redis version 3.2.12\n1:M 17 Feb 11:28:31.494 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Feb 11:28:31.494 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Feb 17 11:28:36.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nsxd7' Feb 17 11:28:36.655: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 17 11:28:36.655: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Feb 17 11:28:36.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-nsxd7' Feb 17 11:28:36.791: INFO: stderr: "No resources found.\n" Feb 17 11:28:36.791: INFO: stdout: "" Feb 17 11:28:36.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-nsxd7 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 17 11:28:36.931: INFO: stderr: "" Feb 17 11:28:36.931: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:28:36.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nsxd7" for this suite. Feb 17 11:29:00.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:29:01.097: INFO: namespace: e2e-tests-kubectl-nsxd7, resource: bindings, ignored listing per whitelist Feb 17 11:29:01.179: INFO: namespace e2e-tests-kubectl-nsxd7 deletion completed in 24.239058883s • [SLOW TEST:39.864 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:29:01.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 17 11:29:09.496: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-b21c5474-5178-11ea-a180-0242ac110008,GenerateName:,Namespace:e2e-tests-events-qflx4,SelfLink:/api/v1/namespaces/e2e-tests-events-qflx4/pods/send-events-b21c5474-5178-11ea-a180-0242ac110008,UID:b21dd338-5178-11ea-a994-fa163e34d433,ResourceVersion:21970538,Generation:0,CreationTimestamp:2020-02-17 11:29:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 434813889,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qvdmm {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qvdmm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-qvdmm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000992d10} {node.kubernetes.io/unreachable Exists NoExecute 0xc000992d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:29:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:29:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:29:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:29:01 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-17 11:29:01 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-17 11:29:08 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://44af5bbd08027bea3cc52ec860c9e852c5fa688cbd098931ac98ebc73198633d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 17 11:29:11.513: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 17 11:29:13.533: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:29:13.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-qflx4" for this suite. 
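The Events test above only asserts that one scheduler event and one kubelet event exist for the pod. The same events can be listed directly with kubectl; a sketch, with <namespace> and <pod-name> as placeholders:

$ kubectl get events -n <namespace> --field-selector involvedObject.name=<pod-name>
# typical entries: a Scheduled event from default-scheduler, followed by
# Pulled/Created/Started events reported by the kubelet on the assigned node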
Feb 17 11:29:53.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:29:53.856: INFO: namespace: e2e-tests-events-qflx4, resource: bindings, ignored listing per whitelist Feb 17 11:29:54.203: INFO: namespace e2e-tests-events-qflx4 deletion completed in 40.618861426s • [SLOW TEST:53.023 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:29:54.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy Feb 17 11:29:54.421: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix810959919/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:29:54.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zqwnb" for this suite. 
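The proxy test above starts kubectl proxy on a Unix domain socket and fetches /api/ through it. A sketch of the equivalent manual check; the socket path is arbitrary and curl is assumed to support --unix-socket:

$ kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
$ curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
# returns the APIVersions object served by the apiserver, e.g. {"kind":"APIVersions","versions":["v1"],...}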
Feb 17 11:30:00.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:30:00.692: INFO: namespace: e2e-tests-kubectl-zqwnb, resource: bindings, ignored listing per whitelist Feb 17 11:30:00.773: INFO: namespace e2e-tests-kubectl-zqwnb deletion completed in 6.25715308s • [SLOW TEST:6.570 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:30:00.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod Feb 17 11:30:10.993: INFO: Pod pod-hostip-d5948b2a-5178-11ea-a180-0242ac110008 has hostIP: 10.96.1.240 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:30:10.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-q77l8" for this suite. 
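The host IP test above only verifies that .status.hostIP is populated once the pod is scheduled (10.96.1.240 here, the node's address). The field can be read directly; a sketch with <namespace> and <pod-name> as placeholders:

$ kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.status.hostIP}'
# prints the IP of the node the pod landed on, e.g. 10.96.1.240 in this run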
Feb 17 11:30:49.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:30:49.208: INFO: namespace: e2e-tests-pods-q77l8, resource: bindings, ignored listing per whitelist Feb 17 11:30:49.322: INFO: namespace e2e-tests-pods-q77l8 deletion completed in 38.321116798s • [SLOW TEST:48.549 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:30:49.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:30:59.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-xxgjw" for this suite. 
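The Kubelet test above schedules a busybox container whose root filesystem is mounted read-only and checks that nothing is written to it. A minimal sketch of the same idea; the pod name, image tag, and probe command below are illustrative and not the manifest the suite uses:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
$ kubectl exec busybox-readonly-demo -- sh -c 'touch /probe'
# expected to fail with "touch: /probe: Read-only file system"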
Feb 17 11:31:45.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:31:45.998: INFO: namespace: e2e-tests-kubelet-test-xxgjw, resource: bindings, ignored listing per whitelist Feb 17 11:31:46.128: INFO: namespace e2e-tests-kubelet-test-xxgjw deletion completed in 46.275843132s • [SLOW TEST:56.806 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:31:46.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 11:31:46.600: INFO: Creating deployment "nginx-deployment" Feb 17 11:31:46.609: INFO: Waiting for observed generation 1 Feb 17 11:31:52.013: INFO: Waiting for all required pods to come up Feb 17 11:31:52.072: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 17 11:32:24.484: INFO: Waiting for deployment "nginx-deployment" to complete Feb 17 11:32:24.500: INFO: Updating deployment "nginx-deployment" with a non-existent image Feb 17 11:32:24.525: INFO: Updating deployment nginx-deployment Feb 17 11:32:24.525: INFO: Waiting for observed generation 2 Feb 17 11:32:27.471: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 17 11:32:27.480: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 17 11:32:28.216: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 17 11:32:29.145: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 17 11:32:29.145: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 17 11:32:29.649: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Feb 17 11:32:29.966: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Feb 17 11:32:29.967: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Feb 17 11:32:30.429: INFO: Updating deployment nginx-deployment Feb 17 11:32:30.429: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Feb 17 11:32:32.398: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 17 11:32:32.799: INFO: Verifying that second rollout's replicaset has .spec.replicas = 
13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 17 11:32:33.514: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-kdqff,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kdqff/deployments/nginx-deployment,UID:148f0d3e-5179-11ea-a994-fa163e34d433,ResourceVersion:21971021,Generation:3,CreationTimestamp:2020-02-17 11:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2020-02-17 11:32:25 +0000 UTC 2020-02-17 11:31:46 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-17 11:32:32 +0000 UTC 2020-02-17 11:32:32 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Feb 17 11:32:33.535: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-kdqff,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kdqff/replicasets/nginx-deployment-5c98f8fb5,UID:2b2ae23c-5179-11ea-a994-fa163e34d433,ResourceVersion:21971019,Generation:3,CreationTimestamp:2020-02-17 11:32:24 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 148f0d3e-5179-11ea-a994-fa163e34d433 0xc001b0d5f7 0xc001b0d5f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 17 11:32:33.535: INFO: All old ReplicaSets of Deployment "nginx-deployment": Feb 17 11:32:33.535: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-kdqff,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-kdqff/replicasets/nginx-deployment-85ddf47c5d,UID:1491ec6c-5179-11ea-a994-fa163e34d433,ResourceVersion:21971017,Generation:3,CreationTimestamp:2020-02-17 11:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 148f0d3e-5179-11ea-a994-fa163e34d433 0xc001b0d6b7 0xc001b0d6b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Feb 17 11:32:33.779: INFO: Pod "nginx-deployment-5c98f8fb5-5xj9n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5xj9n,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-5c98f8fb5-5xj9n,UID:2b67fb06-5179-11ea-a994-fa163e34d433,ResourceVersion:21971009,Generation:0,CreationTimestamp:2020-02-17 11:32:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2b2ae23c-5179-11ea-a994-fa163e34d433 0xc001962067 0xc001962068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001962180} {node.kubernetes.io/unreachable Exists NoExecute 0xc001962220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-17 11:32:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.780: INFO: Pod "nginx-deployment-5c98f8fb5-8z8lr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8z8lr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-5c98f8fb5-8z8lr,UID:2b3da6c8-5179-11ea-a994-fa163e34d433,ResourceVersion:21971000,Generation:0,CreationTimestamp:2020-02-17 11:32:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2b2ae23c-5179-11ea-a994-fa163e34d433 0xc0019622e7 0xc0019622e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001962360} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019623c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:24 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-17 11:32:24 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.780: INFO: Pod "nginx-deployment-5c98f8fb5-c9ncv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-c9ncv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-5c98f8fb5-c9ncv,UID:2b427dc4-5179-11ea-a994-fa163e34d433,ResourceVersion:21971003,Generation:0,CreationTimestamp:2020-02-17 11:32:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2b2ae23c-5179-11ea-a994-fa163e34d433 0xc001962487 0xc001962488}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001962520} {node.kubernetes.io/unreachable Exists NoExecute 0xc001962540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-17 11:32:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.781: INFO: Pod "nginx-deployment-5c98f8fb5-fdk78" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fdk78,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-5c98f8fb5-fdk78,UID:2b746606-5179-11ea-a994-fa163e34d433,ResourceVersion:21971016,Generation:0,CreationTimestamp:2020-02-17 11:32:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2b2ae23c-5179-11ea-a994-fa163e34d433 0xc001962607 0xc001962608}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001962680} {node.kubernetes.io/unreachable Exists NoExecute 0xc001962720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:25 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-17 11:32:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.782: INFO: Pod "nginx-deployment-5c98f8fb5-fm26d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fm26d,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-5c98f8fb5-fm26d,UID:2b42a9a6-5179-11ea-a994-fa163e34d433,ResourceVersion:21971006,Generation:0,CreationTimestamp:2020-02-17 11:32:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2b2ae23c-5179-11ea-a994-fa163e34d433 0xc0019627e7 0xc0019627e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001962b20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001962b40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:25 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-17 11:32:25 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.783: INFO: Pod "nginx-deployment-5c98f8fb5-nb8dw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nb8dw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-5c98f8fb5-nb8dw,UID:3061624b-5179-11ea-a994-fa163e34d433,ResourceVersion:21971033,Generation:0,CreationTimestamp:2020-02-17 11:32:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2b2ae23c-5179-11ea-a994-fa163e34d433 0xc001962c17 0xc001962c18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001962e40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001962e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.783: INFO: Pod "nginx-deployment-5c98f8fb5-r9zj9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-r9zj9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-5c98f8fb5-r9zj9,UID:30837f2d-5179-11ea-a994-fa163e34d433,ResourceVersion:21971034,Generation:0,CreationTimestamp:2020-02-17 11:32:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2b2ae23c-5179-11ea-a994-fa163e34d433 0xc001962ed7 0xc001962ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001963200} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001963220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.783: INFO: Pod "nginx-deployment-5c98f8fb5-xcw8m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xcw8m,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-5c98f8fb5-xcw8m,UID:30830d2b-5179-11ea-a994-fa163e34d433,ResourceVersion:21971031,Generation:0,CreationTimestamp:2020-02-17 11:32:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2b2ae23c-5179-11ea-a994-fa163e34d433 0xc001963680 0xc001963681}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019636f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001963710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.784: INFO: Pod "nginx-deployment-85ddf47c5d-5st8s" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5st8s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-5st8s,UID:14c0d3bf-5179-11ea-a994-fa163e34d433,ResourceVersion:21970931,Generation:0,CreationTimestamp:2020-02-17 11:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1491ec6c-5179-11ea-a994-fa163e34d433 0xc001963a00 
0xc001963a01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001963a60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001963a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-02-17 11:31:48 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 11:32:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://a176876fd2aa569e31fe0aeb59b65acef8269e5731bd25c49afea44cca822224}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.784: INFO: Pod "nginx-deployment-85ddf47c5d-6drfv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6drfv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-6drfv,UID:14a70a50-5179-11ea-a994-fa163e34d433,ResourceVersion:21970918,Generation:0,CreationTimestamp:2020-02-17 11:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1491ec6c-5179-11ea-a994-fa163e34d433 0xc001963d07 0xc001963d08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} 
[{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001963d70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001963d90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-17 11:31:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 11:32:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://47e5ddd9646c5f37d33fefbedc485eed9e7cfcd92fc10ddfb2c82d81b825f3ec}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.784: INFO: Pod "nginx-deployment-85ddf47c5d-6zxn2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6zxn2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-6zxn2,UID:14961bcb-5179-11ea-a994-fa163e34d433,ResourceVersion:21970920,Generation:0,CreationTimestamp:2020-02-17 11:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1491ec6c-5179-11ea-a994-fa163e34d433 0xc001b5a087 0xc001b5a088}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b5a100} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b5a120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-17 11:31:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 11:32:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fbf9f2f0ab52f32d45c4da4131d0821b79fc1d39c303ef6046fd3559408bec1a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.785: INFO: Pod "nginx-deployment-85ddf47c5d-bs8k2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bs8k2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-bs8k2,UID:305ec86b-5179-11ea-a994-fa163e34d433,ResourceVersion:21971025,Generation:0,CreationTimestamp:2020-02-17 11:32:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1491ec6c-5179-11ea-a994-fa163e34d433 0xc001b5a327 0xc001b5a328}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b5a3a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b5a3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.785: INFO: Pod "nginx-deployment-85ddf47c5d-dk4m4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dk4m4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-dk4m4,UID:14c0f1d0-5179-11ea-a994-fa163e34d433,ResourceVersion:21970924,Generation:0,CreationTimestamp:2020-02-17 11:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1491ec6c-5179-11ea-a994-fa163e34d433 0xc001b5a577 0xc001b5a578}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b5a5e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b5a610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:47 
+0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-02-17 11:31:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 11:32:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e0d0fea0a6ab5c1103562a35fe458475530ec81fe7515a4c03681eee34a80b5c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.785: INFO: Pod "nginx-deployment-85ddf47c5d-gk6z6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gk6z6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-gk6z6,UID:14ae0b18-5179-11ea-a994-fa163e34d433,ResourceVersion:21970935,Generation:0,CreationTimestamp:2020-02-17 11:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1491ec6c-5179-11ea-a994-fa163e34d433 0xc001b5a7b7 0xc001b5a7b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b5a830} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b5a850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-02-17 11:31:47 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-02-17 11:32:18 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://426a3f8f6371020fc66712c0b821560484ab8dd933dc7c3c4377a8e8aa5fbfff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.786: INFO: Pod "nginx-deployment-85ddf47c5d-hkdjs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hkdjs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-hkdjs,UID:30839523-5179-11ea-a994-fa163e34d433,ResourceVersion:21971038,Generation:0,CreationTimestamp:2020-02-17 11:32:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1491ec6c-5179-11ea-a994-fa163e34d433 0xc001b5a9d7 0xc001b5a9d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b5aa60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b5aa80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.786: INFO: Pod "nginx-deployment-85ddf47c5d-nswxz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nswxz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-nswxz,UID:14adf649-5179-11ea-a994-fa163e34d433,ResourceVersion:21970906,Generation:0,CreationTimestamp:2020-02-17 11:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 
1491ec6c-5179-11ea-a994-fa163e34d433 0xc001b5aae0 0xc001b5aae1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b5abb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b5abd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-02-17 11:31:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 11:32:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cc726da1492a32636ff36323447245588107651560d25f1a1ae6b4d45d946dee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.786: INFO: Pod "nginx-deployment-85ddf47c5d-pcglv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pcglv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-pcglv,UID:14c08d57-5179-11ea-a994-fa163e34d433,ResourceVersion:21970943,Generation:0,CreationTimestamp:2020-02-17 11:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1491ec6c-5179-11ea-a994-fa163e34d433 0xc001b5ad67 0xc001b5ad68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx 
docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b5ae10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b5ae30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:47 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-17 11:31:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 11:32:16 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b9771582c1bf76e81892cb275fa8ec74b5697ea43ed2407364f577d03cb68908}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.787: INFO: Pod "nginx-deployment-85ddf47c5d-r47gw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r47gw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-r47gw,UID:3084df66-5179-11ea-a994-fa163e34d433,ResourceVersion:21971036,Generation:0,CreationTimestamp:2020-02-17 11:32:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1491ec6c-5179-11ea-a994-fa163e34d433 0xc001b5afd7 0xc001b5afd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b5b070} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b5b090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.787: INFO: Pod "nginx-deployment-85ddf47c5d-r7cxj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r7cxj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-r7cxj,UID:3060e059-5179-11ea-a994-fa163e34d433,ResourceVersion:21971030,Generation:0,CreationTimestamp:2020-02-17 11:32:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1491ec6c-5179-11ea-a994-fa163e34d433 0xc001b5b120 0xc001b5b121}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b5b1a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b5b1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:33 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.787: INFO: Pod "nginx-deployment-85ddf47c5d-vh9rv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vh9rv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-vh9rv,UID:3084323d-5179-11ea-a994-fa163e34d433,ResourceVersion:21971035,Generation:0,CreationTimestamp:2020-02-17 11:32:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1491ec6c-5179-11ea-a994-fa163e34d433 0xc001b5b257 0xc001b5b258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b5b2f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b5b320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.787: INFO: Pod "nginx-deployment-85ddf47c5d-wdfhq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wdfhq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-wdfhq,UID:30839836-5179-11ea-a994-fa163e34d433,ResourceVersion:21971037,Generation:0,CreationTimestamp:2020-02-17 11:32:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1491ec6c-5179-11ea-a994-fa163e34d433 0xc001b5b380 0xc001b5b381}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b5b3f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b5b460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.788: INFO: Pod "nginx-deployment-85ddf47c5d-x52j4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x52j4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-x52j4,UID:14ad8538-5179-11ea-a994-fa163e34d433,ResourceVersion:21970939,Generation:0,CreationTimestamp:2020-02-17 11:31:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1491ec6c-5179-11ea-a994-fa163e34d433 0xc001b5b530 0xc001b5b531}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b5b5a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b5b5d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:31:46 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-02-17 11:31:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-17 11:32:16 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://1482174d58b29d672436b8ebac9f60d424ec9ae540716559e3ac6919116ecb92}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Feb 17 11:32:33.788: INFO: Pod "nginx-deployment-85ddf47c5d-z6g7m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z6g7m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-kdqff,SelfLink:/api/v1/namespaces/e2e-tests-deployment-kdqff/pods/nginx-deployment-85ddf47c5d-z6g7m,UID:306162fc-5179-11ea-a994-fa163e34d433,ResourceVersion:21971032,Generation:0,CreationTimestamp:2020-02-17 11:32:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1491ec6c-5179-11ea-a994-fa163e34d433 0xc001b5b727 0xc001b5b728}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x6qqd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x6qqd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-x6qqd true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b5b7b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b5b7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 11:32:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:32:33.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-kdqff" for this suite. Feb 17 11:33:26.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:33:27.289: INFO: namespace: e2e-tests-deployment-kdqff, resource: bindings, ignored listing per whitelist Feb 17 11:33:27.309: INFO: namespace e2e-tests-deployment-kdqff deletion completed in 53.250741836s • [SLOW TEST:101.180 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:33:27.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Feb 17 11:33:55.238: INFO: Successfully updated pod "annotationupdate510939c8-5179-11ea-a180-0242ac110008" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:33:57.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c7dt5" for this suite. 
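A note on the [sig-apps] Deployment proportional-scaling test above: the pod dumps show one ReplicaSet still running docker.io/library/nginx:1.14-alpine and a newer ReplicaSet stuck pulling nginx:404, which is the situation in which the Deployment controller distributes added replicas across both ReplicaSets in proportion to their current sizes. Below is a minimal Go sketch of a Deployment of that shape; the replica count and the maxSurge/maxUnavailable values are illustrative assumptions, while the names, labels, and image tags mirror the log. This is not the e2e framework's own helper code.

```go
// Minimal sketch: a Deployment shaped like the "nginx-deployment" dumped above,
// with a RollingUpdate strategy whose maxSurge/maxUnavailable leave room for the
// controller to scale old and new ReplicaSets proportionally during a blocked rollout.
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Illustrative values; the exact numbers used by the conformance test are not
	// shown in the log excerpt above.
	maxSurge := intstr.FromInt(3)
	maxUnavailable := intstr.FromInt(2)

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	fmt.Printf("deployment %s: %d replicas, maxSurge=%s, maxUnavailable=%s\n",
		d.Name, *d.Spec.Replicas, maxSurge.String(), maxUnavailable.String())
}
```

With a strategy like this, scaling the Deployment while a rollout is blocked spreads the extra replicas across the old and new ReplicaSets rather than assigning them all to the newest one, which is the behaviour the test asserts.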
Feb 17 11:34:21.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:34:21.521: INFO: namespace: e2e-tests-projected-c7dt5, resource: bindings, ignored listing per whitelist Feb 17 11:34:21.560: INFO: namespace e2e-tests-projected-c7dt5 deletion completed in 24.215667348s • [SLOW TEST:54.251 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:34:21.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 17 11:34:21.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-jsn7l' Feb 17 11:34:22.035: INFO: stderr: "" Feb 17 11:34:22.036: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Feb 17 11:34:22.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jsn7l' Feb 17 11:34:27.101: INFO: stderr: "" Feb 17 11:34:27.101: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:34:27.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jsn7l" for this suite. 
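A note on the Kubectl run pod test above: the logged command `kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine` creates a bare Pod rather than a Deployment or Job. The sketch below approximates the object that ends up on the API server; the `run` label and anything else not visible in the log are assumptions, not the kubectl generator's exact output.

```go
// Rough equivalent of the pod created by `kubectl run ... --restart=Never` above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "e2e-test-nginx-pod",
			Namespace: "e2e-tests-kubectl-jsn7l",
			Labels:    map[string]string{"run": "e2e-test-nginx-pod"}, // label kubectl typically adds; assumed
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // --restart=Never selects the bare-pod generator
			Containers: []corev1.Container{{
				Name:  "e2e-test-nginx-pod",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}
	fmt.Printf("pod/%s restartPolicy=%s\n", pod.Name, pod.Spec.RestartPolicy)
}
```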
Feb 17 11:34:33.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:34:33.378: INFO: namespace: e2e-tests-kubectl-jsn7l, resource: bindings, ignored listing per whitelist Feb 17 11:34:33.431: INFO: namespace e2e-tests-kubectl-jsn7l deletion completed in 6.25828667s • [SLOW TEST:11.871 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:34:33.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 11:34:34.324: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7826259b-5179-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00230ae8a), BlockOwnerDeletion:(*bool)(0xc00230ae8b)}} Feb 17 11:34:34.452: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"781ffa5c-5179-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001c83732), BlockOwnerDeletion:(*bool)(0xc001c83733)}} Feb 17 11:34:34.478: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"78215287-5179-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00151d832), BlockOwnerDeletion:(*bool)(0xc00151d833)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:34:39.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-lqfcq" for this suite. 
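A note on the Garbage collector dependency-circle test above: the three INFO lines show pod1 owned by pod3, pod2 owned by pod1, and pod3 owned by pod2, i.e. a cycle of owner references that the garbage collector must handle without blocking deletion. A minimal sketch of how such a cycle can be expressed follows; the helper names, placeholder UIDs, and the Controller/BlockOwnerDeletion values are assumptions, while the reference shape follows the log.

```go
// Sketch of the circular ownership set up by the test above:
// pod1's owner is pod3, pod2's owner is pod1, pod3's owner is pod2.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func boolPtr(b bool) *bool { return &b }

// ownedPod builds a Pod whose sole owner reference points at another Pod.
func ownedPod(name, ownerName string, ownerUID types.UID) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion:         "v1",
				Kind:               "Pod",
				Name:               ownerName,
				UID:                ownerUID,
				Controller:         boolPtr(true),
				BlockOwnerDeletion: boolPtr(true),
			}},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "nginx", Image: "docker.io/library/nginx:1.14-alpine"}},
		},
	}
}

func main() {
	// Real UIDs come from the API server after each pod is created; these
	// placeholders only show the circular shape.
	pod1 := ownedPod("pod1", "pod3", types.UID("uid-of-pod3"))
	pod2 := ownedPod("pod2", "pod1", types.UID("uid-of-pod1"))
	pod3 := ownedPod("pod3", "pod2", types.UID("uid-of-pod2"))
	for _, p := range []*corev1.Pod{pod1, pod2, pod3} {
		fmt.Printf("%s is owned by %s\n", p.Name, p.OwnerReferences[0].Name)
	}
}
```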
Feb 17 11:34:45.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:34:45.790: INFO: namespace: e2e-tests-gc-lqfcq, resource: bindings, ignored listing per whitelist Feb 17 11:34:45.817: INFO: namespace e2e-tests-gc-lqfcq deletion completed in 6.273318398s • [SLOW TEST:12.385 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:34:45.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-7f8a5250-5179-11ea-a180-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 17 11:34:46.119: INFO: Waiting up to 5m0s for pod "pod-configmaps-7f8c1f02-5179-11ea-a180-0242ac110008" in namespace "e2e-tests-configmap-sbbs2" to be "success or failure" Feb 17 11:34:46.139: INFO: Pod "pod-configmaps-7f8c1f02-5179-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 19.430836ms Feb 17 11:34:48.151: INFO: Pod "pod-configmaps-7f8c1f02-5179-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031661769s Feb 17 11:34:50.162: INFO: Pod "pod-configmaps-7f8c1f02-5179-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042994438s Feb 17 11:34:52.182: INFO: Pod "pod-configmaps-7f8c1f02-5179-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062187564s Feb 17 11:34:54.202: INFO: Pod "pod-configmaps-7f8c1f02-5179-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082436788s Feb 17 11:34:56.498: INFO: Pod "pod-configmaps-7f8c1f02-5179-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.378906437s STEP: Saw pod success Feb 17 11:34:56.499: INFO: Pod "pod-configmaps-7f8c1f02-5179-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:34:56.673: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7f8c1f02-5179-11ea-a180-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 17 11:34:56.744: INFO: Waiting for pod pod-configmaps-7f8c1f02-5179-11ea-a180-0242ac110008 to disappear Feb 17 11:34:56.869: INFO: Pod pod-configmaps-7f8c1f02-5179-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:34:56.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-sbbs2" for this suite. 
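A note on the ConfigMap volume-as-non-root test above: it mounts the logged ConfigMap into a pod and runs the consuming container as a non-root user. The sketch below shows one plausible shape of such a pod; the image, mount path, and the UID 1000 are assumptions for illustration, and only the ConfigMap name is taken from the log.

```go
// Hedged sketch: a pod that mounts a ConfigMap volume and runs its container
// as a non-root UID, roughly what the ConfigMap-as-non-root test exercises.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"}, // placeholder name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "configmap-test-volume-7f8a5250-5179-11ea-a180-0242ac110008",
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "configmap-volume-test",
				Image: "docker.io/library/nginx:1.14-alpine", // image is an assumption
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume", // assumed path
				}},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: int64Ptr(1000), // any non-root UID; the exact value is assumed
				},
			}},
		},
	}
	fmt.Printf("pod %s runs container %q as UID %d\n",
		pod.Name, pod.Spec.Containers[0].Name, *pod.Spec.Containers[0].SecurityContext.RunAsUser)
}
```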
Feb 17 11:35:02.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:35:02.977: INFO: namespace: e2e-tests-configmap-sbbs2, resource: bindings, ignored listing per whitelist Feb 17 11:35:03.092: INFO: namespace e2e-tests-configmap-sbbs2 deletion completed in 6.203764216s • [SLOW TEST:17.274 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:35:03.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Feb 17 11:35:03.947: INFO: Waiting up to 5m0s for pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-ctntn" in namespace "e2e-tests-svcaccounts-xkzsl" to be "success or failure" Feb 17 11:35:04.065: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-ctntn": Phase="Pending", Reason="", readiness=false. Elapsed: 117.996521ms Feb 17 11:35:06.086: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-ctntn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139084606s Feb 17 11:35:08.103: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-ctntn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155768029s Feb 17 11:35:10.336: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-ctntn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.389073289s Feb 17 11:35:12.360: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-ctntn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.412807029s Feb 17 11:35:14.562: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-ctntn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.615193404s Feb 17 11:35:16.599: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-ctntn": Phase="Pending", Reason="", readiness=false. Elapsed: 12.651354407s Feb 17 11:35:18.644: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-ctntn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.696602007s STEP: Saw pod success Feb 17 11:35:18.644: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-ctntn" satisfied condition "success or failure" Feb 17 11:35:18.654: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-ctntn container token-test: STEP: delete the pod Feb 17 11:35:18.800: INFO: Waiting for pod pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-ctntn to disappear Feb 17 11:35:18.807: INFO: Pod pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-ctntn no longer exists STEP: Creating a pod to test consume service account root CA Feb 17 11:35:18.833: INFO: Waiting up to 5m0s for pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-gdgkb" in namespace "e2e-tests-svcaccounts-xkzsl" to be "success or failure" Feb 17 11:35:18.942: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-gdgkb": Phase="Pending", Reason="", readiness=false. Elapsed: 109.311384ms Feb 17 11:35:20.959: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-gdgkb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126115609s Feb 17 11:35:22.971: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-gdgkb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.137666126s Feb 17 11:35:25.008: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-gdgkb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.17506329s Feb 17 11:35:27.736: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-gdgkb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.902805819s Feb 17 11:35:29.766: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-gdgkb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.932732897s Feb 17 11:35:31.777: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-gdgkb": Phase="Pending", Reason="", readiness=false. Elapsed: 12.944148879s Feb 17 11:35:33.801: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-gdgkb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.967662558s Feb 17 11:35:35.817: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-gdgkb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.983922367s STEP: Saw pod success Feb 17 11:35:35.817: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-gdgkb" satisfied condition "success or failure" Feb 17 11:35:35.824: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-gdgkb container root-ca-test: STEP: delete the pod Feb 17 11:35:36.479: INFO: Waiting for pod pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-gdgkb to disappear Feb 17 11:35:36.496: INFO: Pod pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-gdgkb no longer exists STEP: Creating a pod to test consume service account namespace Feb 17 11:35:36.639: INFO: Waiting up to 5m0s for pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-86qn4" in namespace "e2e-tests-svcaccounts-xkzsl" to be "success or failure" Feb 17 11:35:36.663: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-86qn4": Phase="Pending", Reason="", readiness=false. Elapsed: 23.410223ms Feb 17 11:35:38.688: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-86qn4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.048816349s Feb 17 11:35:40.730: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-86qn4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09068361s Feb 17 11:35:42.743: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-86qn4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103312205s Feb 17 11:35:44.761: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-86qn4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122020426s Feb 17 11:35:46.881: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-86qn4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.241705405s Feb 17 11:35:48.922: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-86qn4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.282699572s Feb 17 11:35:51.371: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-86qn4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.731596808s STEP: Saw pod success Feb 17 11:35:51.371: INFO: Pod "pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-86qn4" satisfied condition "success or failure" Feb 17 11:35:51.377: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-86qn4 container namespace-test: STEP: delete the pod Feb 17 11:35:51.607: INFO: Waiting for pod pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-86qn4 to disappear Feb 17 11:35:51.631: INFO: Pod pod-service-account-8a28a010-5179-11ea-a180-0242ac110008-86qn4 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:35:51.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-xkzsl" for this suite. 
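
The ServiceAccounts spec above runs three throwaway pods, each reading one of the files the kubelet mounts for the default service account: the token, the cluster root CA, and the namespace. A sketch of one such pod is below; it is not the test's code, only an illustration using busybox and the standard mount path /var/run/secrets/kubernetes.io/serviceaccount (same v1.13-era client-go assumption as earlier).

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// tokenTestPod returns a pod that just cats one of the files mounted from the
// default service account. file is "token", "ca.crt" or "namespace".
func tokenTestPod(file string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-service-account-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "token-test",
				Image:   "busybox",
				Command: []string{"cat", "/var/run/secrets/kubernetes.io/serviceaccount/" + file},
			}},
		},
	}
}

// runTokenChecks creates the three pods the log above walks through.
func runTokenChecks(c kubernetes.Interface, ns string) error {
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		if _, err := c.CoreV1().Pods(ns).Create(tokenTestPod(f)); err != nil {
			return err
		}
	}
	return nil
}
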
Feb 17 11:35:59.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:35:59.853: INFO: namespace: e2e-tests-svcaccounts-xkzsl, resource: bindings, ignored listing per whitelist Feb 17 11:35:59.888: INFO: namespace e2e-tests-svcaccounts-xkzsl deletion completed in 8.245416471s • [SLOW TEST:56.796 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:35:59.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0217 11:36:14.539771 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 17 11:36:14.539: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:36:14.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-qt9gw" for this suite. 
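
In the garbage-collector spec above, half of the pods created by simpletest-rc-to-be-deleted are given a second owner reference to simpletest-rc-to-stay before the first RC is deleted; the dual-owned pods must survive because they still have a valid owner. The "owner that's waiting for dependents to be deleted" in the spec name corresponds to a foreground-propagation delete, sketched below (v1.13-era signature where Delete takes the name and a *DeleteOptions; this is an illustration of the API, not the test's own code).

package e2esketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCInForeground deletes a ReplicationController with foreground
// propagation, so the garbage collector must work through its dependents
// before the owner itself disappears. Dependents that still have another
// valid owner, as in the spec above, are kept.
func deleteRCInForeground(c kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationForeground
	return c.CoreV1().ReplicationControllers(ns).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
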
Feb 17 11:36:37.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:36:37.731: INFO: namespace: e2e-tests-gc-qt9gw, resource: bindings, ignored listing per whitelist Feb 17 11:36:37.835: INFO: namespace e2e-tests-gc-qt9gw deletion completed in 23.286693399s • [SLOW TEST:37.946 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:36:37.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-4zgq9 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4zgq9 to expose endpoints map[] Feb 17 11:36:38.176: INFO: Get endpoints failed (18.78324ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Feb 17 11:36:39.197: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4zgq9 exposes endpoints map[] (1.039809272s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-4zgq9 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4zgq9 to expose endpoints map[pod1:[100]] Feb 17 11:36:44.232: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.997744261s elapsed, will retry) Feb 17 11:36:49.662: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (10.427543547s elapsed, will retry) Feb 17 11:36:50.682: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4zgq9 exposes endpoints map[pod1:[100]] (11.447976076s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-4zgq9 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4zgq9 to expose endpoints map[pod1:[100] pod2:[101]] Feb 17 11:36:55.338: INFO: Unexpected endpoints: found map[c2f7f80c-5179-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.64582483s elapsed, will retry) Feb 17 11:36:58.441: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4zgq9 exposes endpoints map[pod2:[101] pod1:[100]] (7.748028546s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-4zgq9 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4zgq9 to expose endpoints map[pod2:[101]] Feb 17 11:36:59.501: INFO: successfully validated that 
service multi-endpoint-test in namespace e2e-tests-services-4zgq9 exposes endpoints map[pod2:[101]] (1.03908764s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-4zgq9 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4zgq9 to expose endpoints map[] Feb 17 11:37:01.359: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4zgq9 exposes endpoints map[] (1.846298335s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:37:04.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-4zgq9" for this suite. Feb 17 11:37:26.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:37:26.910: INFO: namespace: e2e-tests-services-4zgq9, resource: bindings, ignored listing per whitelist Feb 17 11:37:26.992: INFO: namespace e2e-tests-services-4zgq9 deletion completed in 22.613541686s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:49.156 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:37:26.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-nn9ts STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 17 11:37:27.143: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 17 11:37:57.344: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-nn9ts PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 17 11:37:57.345: INFO: >>> kubeConfig: /root/.kube/config I0217 11:37:57.510030 8 log.go:172] (0xc001e5a370) (0xc0008d45a0) Create stream I0217 11:37:57.510379 8 log.go:172] (0xc001e5a370) (0xc0008d45a0) Stream added, broadcasting: 1 I0217 11:37:57.523832 8 log.go:172] (0xc001e5a370) Reply frame received for 1 I0217 11:37:57.524025 8 log.go:172] (0xc001e5a370) (0xc0008d4640) Create stream I0217 11:37:57.524040 8 log.go:172] (0xc001e5a370) (0xc0008d4640) Stream added, broadcasting: 3 I0217 11:37:57.526001 8 log.go:172] (0xc001e5a370) Reply frame received for 3 I0217 
11:37:57.526056 8 log.go:172] (0xc001e5a370) (0xc0010ac280) Create stream I0217 11:37:57.526063 8 log.go:172] (0xc001e5a370) (0xc0010ac280) Stream added, broadcasting: 5 I0217 11:37:57.527650 8 log.go:172] (0xc001e5a370) Reply frame received for 5 I0217 11:37:57.714723 8 log.go:172] (0xc001e5a370) Data frame received for 3 I0217 11:37:57.714842 8 log.go:172] (0xc0008d4640) (3) Data frame handling I0217 11:37:57.714863 8 log.go:172] (0xc0008d4640) (3) Data frame sent I0217 11:37:57.841865 8 log.go:172] (0xc001e5a370) Data frame received for 1 I0217 11:37:57.842049 8 log.go:172] (0xc001e5a370) (0xc0008d4640) Stream removed, broadcasting: 3 I0217 11:37:57.842158 8 log.go:172] (0xc0008d45a0) (1) Data frame handling I0217 11:37:57.842193 8 log.go:172] (0xc0008d45a0) (1) Data frame sent I0217 11:37:57.842208 8 log.go:172] (0xc001e5a370) (0xc0010ac280) Stream removed, broadcasting: 5 I0217 11:37:57.842281 8 log.go:172] (0xc001e5a370) (0xc0008d45a0) Stream removed, broadcasting: 1 I0217 11:37:57.842301 8 log.go:172] (0xc001e5a370) Go away received I0217 11:37:57.843116 8 log.go:172] (0xc001e5a370) (0xc0008d45a0) Stream removed, broadcasting: 1 I0217 11:37:57.843139 8 log.go:172] (0xc001e5a370) (0xc0008d4640) Stream removed, broadcasting: 3 I0217 11:37:57.843157 8 log.go:172] (0xc001e5a370) (0xc0010ac280) Stream removed, broadcasting: 5 Feb 17 11:37:57.843: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:37:57.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-nn9ts" for this suite. Feb 17 11:38:21.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:38:22.072: INFO: namespace: e2e-tests-pod-network-test-nn9ts, resource: bindings, ignored listing per whitelist Feb 17 11:38:22.082: INFO: namespace e2e-tests-pod-network-test-nn9ts deletion completed in 24.219165778s • [SLOW TEST:55.089 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:38:22.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status 
STEP: Creating an uninitialized pod in the namespace Feb 17 11:38:32.816: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:38:59.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-fngtx" for this suite. Feb 17 11:39:07.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:39:07.691: INFO: namespace: e2e-tests-namespaces-fngtx, resource: bindings, ignored listing per whitelist Feb 17 11:39:07.756: INFO: namespace e2e-tests-namespaces-fngtx deletion completed in 8.223768046s STEP: Destroying namespace "e2e-tests-nsdeletetest-wk9zh" for this suite. Feb 17 11:39:07.763: INFO: Namespace e2e-tests-nsdeletetest-wk9zh was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-68jmh" for this suite. Feb 17 11:39:13.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:39:14.012: INFO: namespace: e2e-tests-nsdeletetest-68jmh, resource: bindings, ignored listing per whitelist Feb 17 11:39:14.028: INFO: namespace e2e-tests-nsdeletetest-68jmh deletion completed in 6.264134181s • [SLOW TEST:51.946 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:39:14.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 17 11:39:14.293: INFO: Waiting up to 5m0s for pod "downward-api-1f63af57-517a-11ea-a180-0242ac110008" in namespace "e2e-tests-downward-api-g9cht" to be "success or failure" Feb 17 11:39:14.403: INFO: Pod "downward-api-1f63af57-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 110.199791ms Feb 17 11:39:16.585: INFO: Pod "downward-api-1f63af57-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.292098952s Feb 17 11:39:18.619: INFO: Pod "downward-api-1f63af57-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326090933s Feb 17 11:39:20.754: INFO: Pod "downward-api-1f63af57-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.460685924s Feb 17 11:39:22.808: INFO: Pod "downward-api-1f63af57-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514326571s Feb 17 11:39:24.824: INFO: Pod "downward-api-1f63af57-517a-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.531153292s STEP: Saw pod success Feb 17 11:39:24.824: INFO: Pod "downward-api-1f63af57-517a-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:39:24.828: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-1f63af57-517a-11ea-a180-0242ac110008 container dapi-container: STEP: delete the pod Feb 17 11:39:24.890: INFO: Waiting for pod downward-api-1f63af57-517a-11ea-a180-0242ac110008 to disappear Feb 17 11:39:24.912: INFO: Pod downward-api-1f63af57-517a-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:39:24.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-g9cht" for this suite. Feb 17 11:39:31.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:39:31.225: INFO: namespace: e2e-tests-downward-api-g9cht, resource: bindings, ignored listing per whitelist Feb 17 11:39:31.273: INFO: namespace e2e-tests-downward-api-g9cht deletion completed in 6.352586651s • [SLOW TEST:17.245 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:39:31.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 11:39:31.469: INFO: Waiting up to 5m0s for pod "downwardapi-volume-29a1e32a-517a-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-5r9t4" to be "success or failure" Feb 17 11:39:31.498: INFO: Pod "downwardapi-volume-29a1e32a-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 29.035444ms Feb 17 11:39:33.508: INFO: Pod "downwardapi-volume-29a1e32a-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038720995s Feb 17 11:39:35.521: INFO: Pod "downwardapi-volume-29a1e32a-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.052364242s Feb 17 11:39:37.536: INFO: Pod "downwardapi-volume-29a1e32a-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067093254s Feb 17 11:39:39.651: INFO: Pod "downwardapi-volume-29a1e32a-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181919176s Feb 17 11:39:41.733: INFO: Pod "downwardapi-volume-29a1e32a-517a-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.264195637s STEP: Saw pod success Feb 17 11:39:41.733: INFO: Pod "downwardapi-volume-29a1e32a-517a-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:39:41.740: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-29a1e32a-517a-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 11:39:42.112: INFO: Waiting for pod downwardapi-volume-29a1e32a-517a-11ea-a180-0242ac110008 to disappear Feb 17 11:39:42.132: INFO: Pod downwardapi-volume-29a1e32a-517a-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:39:42.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5r9t4" for this suite. Feb 17 11:39:48.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:39:48.399: INFO: namespace: e2e-tests-projected-5r9t4, resource: bindings, ignored listing per whitelist Feb 17 11:39:48.408: INFO: namespace e2e-tests-projected-5r9t4 deletion completed in 6.256523593s • [SLOW TEST:17.135 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:39:48.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-33f429b6-517a-11ea-a180-0242ac110008 STEP: Creating a pod to test consume secrets Feb 17 11:39:48.795: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-33f588fe-517a-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-hznml" to be "success or failure" Feb 17 11:39:48.827: INFO: Pod "pod-projected-secrets-33f588fe-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 31.632444ms Feb 17 11:39:50.840: INFO: Pod "pod-projected-secrets-33f588fe-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044802916s Feb 17 11:39:52.862: INFO: Pod "pod-projected-secrets-33f588fe-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067222992s Feb 17 11:39:54.913: INFO: Pod "pod-projected-secrets-33f588fe-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118101575s Feb 17 11:39:56.942: INFO: Pod "pod-projected-secrets-33f588fe-517a-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.146915282s STEP: Saw pod success Feb 17 11:39:56.942: INFO: Pod "pod-projected-secrets-33f588fe-517a-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:39:56.955: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-33f588fe-517a-11ea-a180-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 17 11:39:57.141: INFO: Waiting for pod pod-projected-secrets-33f588fe-517a-11ea-a180-0242ac110008 to disappear Feb 17 11:39:57.150: INFO: Pod pod-projected-secrets-33f588fe-517a-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:39:57.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hznml" for this suite. Feb 17 11:40:03.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:40:03.305: INFO: namespace: e2e-tests-projected-hznml, resource: bindings, ignored listing per whitelist Feb 17 11:40:03.373: INFO: namespace e2e-tests-projected-hznml deletion completed in 6.215365967s • [SLOW TEST:14.964 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:40:03.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-3ccafe09-517a-11ea-a180-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 17 11:40:03.675: INFO: Waiting up to 5m0s for pod "pod-configmaps-3cce85ca-517a-11ea-a180-0242ac110008" in namespace "e2e-tests-configmap-4n4cs" to be "success or failure" Feb 17 11:40:03.809: INFO: Pod "pod-configmaps-3cce85ca-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 133.872758ms Feb 17 11:40:05.837: INFO: Pod "pod-configmaps-3cce85ca-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161099086s Feb 17 11:40:07.856: INFO: Pod "pod-configmaps-3cce85ca-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180877145s Feb 17 11:40:09.892: INFO: Pod "pod-configmaps-3cce85ca-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216893662s Feb 17 11:40:11.909: INFO: Pod "pod-configmaps-3cce85ca-517a-11ea-a180-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 8.233329965s Feb 17 11:40:14.055: INFO: Pod "pod-configmaps-3cce85ca-517a-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.379881584s STEP: Saw pod success Feb 17 11:40:14.056: INFO: Pod "pod-configmaps-3cce85ca-517a-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:40:14.096: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3cce85ca-517a-11ea-a180-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 17 11:40:14.346: INFO: Waiting for pod pod-configmaps-3cce85ca-517a-11ea-a180-0242ac110008 to disappear Feb 17 11:40:14.386: INFO: Pod pod-configmaps-3cce85ca-517a-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:40:14.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4n4cs" for this suite. Feb 17 11:40:20.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:40:20.527: INFO: namespace: e2e-tests-configmap-4n4cs, resource: bindings, ignored listing per whitelist Feb 17 11:40:20.650: INFO: namespace e2e-tests-configmap-4n4cs deletion completed in 6.253454736s • [SLOW TEST:17.276 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:40:20.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 11:40:20.946: INFO: Waiting up to 5m0s for pod "downwardapi-volume-471eb850-517a-11ea-a180-0242ac110008" in namespace "e2e-tests-downward-api-n5qnp" to be "success or failure" Feb 17 
11:40:20.979: INFO: Pod "downwardapi-volume-471eb850-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 33.286826ms Feb 17 11:40:22.995: INFO: Pod "downwardapi-volume-471eb850-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048650564s Feb 17 11:40:25.942: INFO: Pod "downwardapi-volume-471eb850-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.996071258s Feb 17 11:40:27.956: INFO: Pod "downwardapi-volume-471eb850-517a-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.010000452s Feb 17 11:40:29.974: INFO: Pod "downwardapi-volume-471eb850-517a-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.028472024s STEP: Saw pod success Feb 17 11:40:29.975: INFO: Pod "downwardapi-volume-471eb850-517a-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:40:29.981: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-471eb850-517a-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 11:40:30.818: INFO: Waiting for pod downwardapi-volume-471eb850-517a-11ea-a180-0242ac110008 to disappear Feb 17 11:40:30.851: INFO: Pod downwardapi-volume-471eb850-517a-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:40:30.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-n5qnp" for this suite. Feb 17 11:40:36.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:40:37.042: INFO: namespace: e2e-tests-downward-api-n5qnp, resource: bindings, ignored listing per whitelist Feb 17 11:40:37.198: INFO: namespace e2e-tests-downward-api-n5qnp deletion completed in 6.33066417s • [SLOW TEST:16.548 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:40:37.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed Feb 17 11:40:49.640: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-50f7397f-517a-11ea-a180-0242ac110008", 
GenerateName:"", Namespace:"e2e-tests-pods-jlp76", SelfLink:"/api/v1/namespaces/e2e-tests-pods-jlp76/pods/pod-submit-remove-50f7397f-517a-11ea-a180-0242ac110008", UID:"50fa93d6-517a-11ea-a994-fa163e34d433", ResourceVersion:"21972402", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717536437, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"446111954"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-n94sg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001ee4d40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-n94sg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00203c338), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001b26c00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00203c370)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00203c390)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00203c398), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00203c39c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717536437, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717536448, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717536448, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717536437, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc0021d6160), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0021d6180), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://b302ea0df9105902bcc548ce5e1fa0904135c2c964a6dc8041928834c3e81280"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:40:55.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-jlp76" for this suite. 
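
The Pods spec whose ObjectMeta dump appears above sets up a watch before submitting the pod, then deletes the pod gracefully and expects the creation and deletion events to arrive on that watch. The deletion half of that flow is sketched below (v1.13-era client-go where Watch/Delete take no context; the name=foo selector matches the labels in the dump, while the 30-second grace period and function names are just examples).

package e2esketch

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchPodLifecycle watches pods labelled name=foo in ns, deletes podName with
// a grace period, and waits until the DELETED event is observed on the watch.
func watchPodLifecycle(c kubernetes.Interface, ns, podName string) error {
	w, err := c.CoreV1().Pods(ns).Watch(metav1.ListOptions{LabelSelector: "name=foo"})
	if err != nil {
		return err
	}
	defer w.Stop()

	grace := int64(30)
	if err := c.CoreV1().Pods(ns).Delete(podName, &metav1.DeleteOptions{
		GracePeriodSeconds: &grace,
	}); err != nil {
		return err
	}

	timeout := time.After(2 * time.Minute)
	for {
		select {
		case ev, ok := <-w.ResultChan():
			if !ok {
				return fmt.Errorf("watch channel closed before deletion was observed")
			}
			if ev.Type == watch.Deleted {
				return nil // pod deletion was observed, as in the spec above
			}
		case <-timeout:
			return fmt.Errorf("timed out waiting for pod %s to be deleted", podName)
		}
	}
}
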
Feb 17 11:41:01.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:41:01.365: INFO: namespace: e2e-tests-pods-jlp76, resource: bindings, ignored listing per whitelist Feb 17 11:41:01.401: INFO: namespace e2e-tests-pods-jlp76 deletion completed in 6.261962843s • [SLOW TEST:24.202 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:41:01.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-8nwz7 Feb 17 11:41:11.645: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-8nwz7 STEP: checking the pod's current state and verifying that restartCount is present Feb 17 11:41:11.654: INFO: Initial restart count of pod liveness-exec is 0 Feb 17 11:42:06.241: INFO: Restart count of pod e2e-tests-container-probe-8nwz7/liveness-exec is now 1 (54.587177189s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:42:06.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-8nwz7" for this suite. 
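
The liveness spec above runs a container that creates /tmp/health, sleeps, then removes it; the exec probe running "cat /tmp/health" starts failing and the kubelet restarts the container, which is the restart-count bump from 0 to 1 the log records. A sketch of that container spec follows (k8s 1.13-era core/v1 types, where the probe handler field is still named Handler; image, timings and names are illustrative).

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// livenessExecPod builds a pod whose container deletes its own health file
// after 10 seconds, so the exec liveness probe fails and triggers a restart.
func livenessExecPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				Args: []string{
					"/bin/sh", "-c",
					"touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600",
				},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
}
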
Feb 17 11:42:14.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:42:14.846: INFO: namespace: e2e-tests-container-probe-8nwz7, resource: bindings, ignored listing per whitelist Feb 17 11:42:14.883: INFO: namespace e2e-tests-container-probe-8nwz7 deletion completed in 8.367470534s • [SLOW TEST:73.483 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:42:14.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:42:21.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-v62km" for this suite. Feb 17 11:42:28.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:42:28.151: INFO: namespace: e2e-tests-namespaces-v62km, resource: bindings, ignored listing per whitelist Feb 17 11:42:28.164: INFO: namespace e2e-tests-namespaces-v62km deletion completed in 6.259210858s STEP: Destroying namespace "e2e-tests-nsdeletetest-45x2p" for this suite. Feb 17 11:42:28.169: INFO: Namespace e2e-tests-nsdeletetest-45x2p was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-f9d5d" for this suite. 
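
Both Namespaces [Serial] specs in this run have the same shape: create a throwaway namespace, put an object in it (a pod in the earlier spec, a service here), delete the namespace, and verify the object is gone once the namespace finishes terminating. A sketch of the service variant is below (illustrative names and timings; v1.13-era API as in the earlier sketches).

package e2esketch

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// namespaceDeletionRemovesServices creates a service in ns, deletes ns and
// waits until the namespace, and with it the service, is fully gone.
func namespaceDeletionRemovesServices(c kubernetes.Interface, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "nsdeletetest"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	if _, err := c.CoreV1().Services(ns).Create(svc); err != nil {
		return err
	}

	if err := c.CoreV1().Namespaces().Delete(ns, nil); err != nil {
		return err
	}

	// Namespace deletion is asynchronous: poll until the GET returns NotFound.
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().Namespaces().Get(ns, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	})
}
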
Feb 17 11:42:34.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:42:34.350: INFO: namespace: e2e-tests-nsdeletetest-f9d5d, resource: bindings, ignored listing per whitelist Feb 17 11:42:34.357: INFO: namespace e2e-tests-nsdeletetest-f9d5d deletion completed in 6.188053535s • [SLOW TEST:19.474 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:42:34.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-6qfcf [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet Feb 17 11:42:34.839: INFO: Found 0 stateful pods, waiting for 3 Feb 17 11:42:45.037: INFO: Found 2 stateful pods, waiting for 3 Feb 17 11:42:54.868: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 17 11:42:54.869: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 17 11:42:54.869: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 17 11:43:04.871: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 17 11:43:04.871: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 17 11:43:04.871: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 17 11:43:04.943: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Feb 17 11:43:15.012: INFO: Updating stateful set ss2 Feb 17 11:43:15.163: INFO: Waiting for Pod e2e-tests-statefulset-6qfcf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Feb 17 11:43:25.602: INFO: Found 2 stateful pods, waiting for 3 Feb 17 11:43:36.124: INFO: Found 2 stateful pods, waiting for 3 Feb 17 11:43:46.153: INFO: Waiting 
for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 17 11:43:46.153: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 17 11:43:46.153: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 17 11:43:55.620: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 17 11:43:55.620: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 17 11:43:55.620: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Feb 17 11:43:55.669: INFO: Updating stateful set ss2 Feb 17 11:43:55.989: INFO: Waiting for Pod e2e-tests-statefulset-6qfcf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 17 11:44:06.040: INFO: Waiting for Pod e2e-tests-statefulset-6qfcf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 17 11:44:16.658: INFO: Updating stateful set ss2 Feb 17 11:44:16.828: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qfcf/ss2 to complete update Feb 17 11:44:16.828: INFO: Waiting for Pod e2e-tests-statefulset-6qfcf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 17 11:44:26.918: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qfcf/ss2 to complete update Feb 17 11:44:26.919: INFO: Waiting for Pod e2e-tests-statefulset-6qfcf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 17 11:44:36.924: INFO: Waiting for StatefulSet e2e-tests-statefulset-6qfcf/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 17 11:44:46.886: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6qfcf Feb 17 11:44:46.897: INFO: Scaling statefulset ss2 to 0 Feb 17 11:45:27.021: INFO: Waiting for statefulset status.replicas updated to 0 Feb 17 11:45:27.032: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:45:27.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-6qfcf" for this suite. 
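The canary and phased rollout exercised above is driven entirely by the StatefulSet's RollingUpdate partition: pods with an ordinal at or above the partition move to the update revision, pods below it stay on the current one, and lowering the partition phases the remaining ordinals in. A minimal sketch of the objects involved, using the k8s.io/api types this suite is built on; the 3-replica shape, the ss2 name and the nginx images come from the log, while the helper itself is illustrative rather than the test's own code:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// canaryStatefulSet builds a 3-replica StatefulSet whose RollingUpdate
// partition keeps ordinals below `partition` on the old revision, so only
// the highest ordinals (the canaries) pick up a template change.
func canaryStatefulSet(image string, partition int32) *appsv1.StatefulSet {
	labels := map[string]string{"app": "ss2"}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    int32Ptr(3),
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: image}},
				},
			},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: int32Ptr(partition),
				},
			},
		},
	}
}

func main() {
	// Canary: new image, but partition=2 means only ss2-2 is rolled.
	ss := canaryStatefulSet("docker.io/library/nginx:1.15-alpine", 2)
	fmt.Println(*ss.Spec.UpdateStrategy.RollingUpdate.Partition)
	// A phased rollout then lowers the partition (2 -> 1 -> 0) so ss2-1
	// and ss2-0 follow once the canary looks healthy.
}
```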
Feb 17 11:45:35.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:45:35.314: INFO: namespace: e2e-tests-statefulset-6qfcf, resource: bindings, ignored listing per whitelist Feb 17 11:45:35.387: INFO: namespace e2e-tests-statefulset-6qfcf deletion completed in 8.273565008s • [SLOW TEST:181.030 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:45:35.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-02b0cfdb-517b-11ea-a180-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 17 11:45:35.658: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-02b1e941-517b-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-wjr79" to be "success or failure" Feb 17 11:45:35.688: INFO: Pod "pod-projected-configmaps-02b1e941-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 29.118505ms Feb 17 11:45:37.971: INFO: Pod "pod-projected-configmaps-02b1e941-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312574042s Feb 17 11:45:39.999: INFO: Pod "pod-projected-configmaps-02b1e941-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340917077s Feb 17 11:45:42.103: INFO: Pod "pod-projected-configmaps-02b1e941-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444120262s Feb 17 11:45:44.438: INFO: Pod "pod-projected-configmaps-02b1e941-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.779509891s Feb 17 11:45:46.872: INFO: Pod "pod-projected-configmaps-02b1e941-517b-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.213409007s STEP: Saw pod success Feb 17 11:45:46.872: INFO: Pod "pod-projected-configmaps-02b1e941-517b-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:45:47.400: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-02b1e941-517b-11ea-a180-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 17 11:45:47.599: INFO: Waiting for pod pod-projected-configmaps-02b1e941-517b-11ea-a180-0242ac110008 to disappear Feb 17 11:45:47.611: INFO: Pod pod-projected-configmaps-02b1e941-517b-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:45:47.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wjr79" for this suite. Feb 17 11:45:53.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:45:53.744: INFO: namespace: e2e-tests-projected-wjr79, resource: bindings, ignored listing per whitelist Feb 17 11:45:53.877: INFO: namespace e2e-tests-projected-wjr79 deletion completed in 6.257260946s • [SLOW TEST:18.489 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:45:53.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 17 11:45:54.229: INFO: Waiting up to 5m0s for pod "pod-0dc5613a-517b-11ea-a180-0242ac110008" in namespace "e2e-tests-emptydir-m9ggx" to be "success or failure" Feb 17 11:45:54.255: INFO: Pod "pod-0dc5613a-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 25.578234ms Feb 17 11:45:56.661: INFO: Pod "pod-0dc5613a-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.431477286s Feb 17 11:45:58.669: INFO: Pod "pod-0dc5613a-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.439924008s Feb 17 11:46:00.824: INFO: Pod "pod-0dc5613a-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.594382548s Feb 17 11:46:02.843: INFO: Pod "pod-0dc5613a-517b-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.613111329s STEP: Saw pod success Feb 17 11:46:02.843: INFO: Pod "pod-0dc5613a-517b-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:46:02.857: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0dc5613a-517b-11ea-a180-0242ac110008 container test-container: STEP: delete the pod Feb 17 11:46:02.958: INFO: Waiting for pod pod-0dc5613a-517b-11ea-a180-0242ac110008 to disappear Feb 17 11:46:02.995: INFO: Pod pod-0dc5613a-517b-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:46:02.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-m9ggx" for this suite. Feb 17 11:46:09.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:46:09.398: INFO: namespace: e2e-tests-emptydir-m9ggx, resource: bindings, ignored listing per whitelist Feb 17 11:46:09.402: INFO: namespace e2e-tests-emptydir-m9ggx deletion completed in 6.398410358s • [SLOW TEST:15.525 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:46:09.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments Feb 17 11:46:09.800: INFO: Waiting up to 5m0s for pod "client-containers-17098389-517b-11ea-a180-0242ac110008" in namespace "e2e-tests-containers-lj4d8" to be "success or failure" Feb 17 11:46:09.901: INFO: Pod "client-containers-17098389-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 101.270793ms Feb 17 11:46:11.922: INFO: Pod "client-containers-17098389-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122176873s Feb 17 11:46:13.941: INFO: Pod "client-containers-17098389-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140929854s Feb 17 11:46:15.962: INFO: Pod "client-containers-17098389-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.161925355s Feb 17 11:46:18.025: INFO: Pod "client-containers-17098389-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22513037s Feb 17 11:46:20.038: INFO: Pod "client-containers-17098389-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.238238416s Feb 17 11:46:22.050: INFO: Pod "client-containers-17098389-517b-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.250649207s STEP: Saw pod success Feb 17 11:46:22.051: INFO: Pod "client-containers-17098389-517b-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:46:22.055: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-17098389-517b-11ea-a180-0242ac110008 container test-container: STEP: delete the pod Feb 17 11:46:22.582: INFO: Waiting for pod client-containers-17098389-517b-11ea-a180-0242ac110008 to disappear Feb 17 11:46:22.877: INFO: Pod client-containers-17098389-517b-11ea-a180-0242ac110008 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:46:22.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-lj4d8" for this suite. Feb 17 11:46:31.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:46:31.140: INFO: namespace: e2e-tests-containers-lj4d8, resource: bindings, ignored listing per whitelist Feb 17 11:46:31.197: INFO: namespace e2e-tests-containers-lj4d8 deletion completed in 8.29917386s • [SLOW TEST:21.794 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:46:31.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 11:46:31.403: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23ed6b14-517b-11ea-a180-0242ac110008" in namespace "e2e-tests-downward-api-lb6nc" to be "success or failure" Feb 17 11:46:31.478: INFO: Pod "downwardapi-volume-23ed6b14-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 75.416039ms Feb 17 11:46:33.649: INFO: Pod "downwardapi-volume-23ed6b14-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.246359964s Feb 17 11:46:35.664: INFO: Pod "downwardapi-volume-23ed6b14-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261307835s Feb 17 11:46:37.887: INFO: Pod "downwardapi-volume-23ed6b14-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.484372078s Feb 17 11:46:39.952: INFO: Pod "downwardapi-volume-23ed6b14-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548701642s Feb 17 11:46:42.305: INFO: Pod "downwardapi-volume-23ed6b14-517b-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.902414132s STEP: Saw pod success Feb 17 11:46:42.306: INFO: Pod "downwardapi-volume-23ed6b14-517b-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:46:42.327: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-23ed6b14-517b-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 11:46:42.697: INFO: Waiting for pod downwardapi-volume-23ed6b14-517b-11ea-a180-0242ac110008 to disappear Feb 17 11:46:42.722: INFO: Pod downwardapi-volume-23ed6b14-517b-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:46:42.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-lb6nc" for this suite. Feb 17 11:46:48.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:46:48.982: INFO: namespace: e2e-tests-downward-api-lb6nc, resource: bindings, ignored listing per whitelist Feb 17 11:46:49.131: INFO: namespace e2e-tests-downward-api-lb6nc deletion completed in 6.394604577s • [SLOW TEST:17.934 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:46:49.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 11:47:19.375: INFO: Container started at 2020-02-17 11:46:56 +0000 UTC, pod became ready at 2020-02-17 11:47:17 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:47:19.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-s65qg" for this suite. 
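The readiness-probe test above measures the gap between the container starting (11:46:56) and the pod turning Ready (11:47:17), which is exactly what a readiness probe with an initial delay produces. A sketch of such a pod with the v1.13-era corev1 types used by this suite; the busybox image, probe command and 30s/5s timings are illustrative assumptions, not the test's actual values:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The kubelet keeps Ready=false until the readiness probe has succeeded,
	// and the probe does not even run before InitialDelaySeconds have passed;
	// that window is what the test above asserts on.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "echo ok > /tmp/ready && sleep 600"},
				ReadinessProbe: &corev1.Probe{
					// v1.13-era field name; later releases call this ProbeHandler.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
					},
					InitialDelaySeconds: 30,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```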
Feb 17 11:47:43.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:47:43.740: INFO: namespace: e2e-tests-container-probe-s65qg, resource: bindings, ignored listing per whitelist Feb 17 11:47:43.885: INFO: namespace e2e-tests-container-probe-s65qg deletion completed in 24.495475536s • [SLOW TEST:54.753 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:47:43.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Feb 17 11:47:44.202: INFO: Pod name pod-release: Found 0 pods out of 1 Feb 17 11:47:49.222: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:47:51.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-w8qd5" for this suite. 
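The "release" in the ReplicationController test above is purely a labelling operation: the test edits one pod of the pod-release controller so its labels no longer match the selector, at which point the controller orphans that pod and creates a replacement to restore the replica count. A rough sketch of the objects involved, assuming a single name=pod-release selector label and an nginx image (the log does not show the actual pod template):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-release"}

	// A ReplicationController manages exactly the pods matched by its selector.
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}},
				},
			},
		},
	}

	// "Releasing" a pod is just making its labels stop matching: the
	// controller drops its ownerReference on the pod and creates a new one.
	pod := corev1.Pod{ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "pod-release"}}}
	pod.Labels["name"] = "no-longer-matching"
	fmt.Println(rc.Spec.Selector, pod.Labels)
}
```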
Feb 17 11:48:00.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:48:01.255: INFO: namespace: e2e-tests-replication-controller-w8qd5, resource: bindings, ignored listing per whitelist Feb 17 11:48:01.316: INFO: namespace e2e-tests-replication-controller-w8qd5 deletion completed in 9.390926899s • [SLOW TEST:17.431 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:48:01.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 11:48:02.364: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 Feb 17 11:48:02.405: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-zjwmj/daemonsets","resourceVersion":"21973414"},"items":null} Feb 17 11:48:02.482: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-zjwmj/pods","resourceVersion":"21973414"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:48:02.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-zjwmj" for this suite. 
Feb 17 11:48:10.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:48:10.689: INFO: namespace: e2e-tests-daemonsets-zjwmj, resource: bindings, ignored listing per whitelist Feb 17 11:48:10.744: INFO: namespace e2e-tests-daemonsets-zjwmj deletion completed in 8.167787936s S [SKIPPING] [9.428 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 11:48:02.364: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:48:10.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0217 11:48:52.616197 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 17 11:48:52.616: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:48:52.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-sk68r" for this suite. 
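The garbage-collector test above relies on orphan propagation when deleting the owning rc: the dependents lose their ownerReferences instead of being deleted, which is why the pods survive the 30-second observation window after the rc is gone. A minimal sketch with the metav1 types this suite uses; the rc name in the comment is a placeholder, and the exact client-go Delete signature depends on the release:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// PropagationPolicy=Orphan tells the garbage collector to strip
	// ownerReferences from the dependents rather than cascade the delete.
	orphan := metav1.DeletePropagationOrphan
	opts := &metav1.DeleteOptions{PropagationPolicy: &orphan}
	fmt.Println(*opts.PropagationPolicy) // "Orphan"

	// With the v1.13-era client-go used by this suite the call is roughly:
	//   client.CoreV1().ReplicationControllers(ns).Delete("<rc-name>", opts)
	// (newer releases add a context argument and take the options by value).
}
```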
Feb 17 11:49:00.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:49:00.925: INFO: namespace: e2e-tests-gc-sk68r, resource: bindings, ignored listing per whitelist Feb 17 11:49:04.116: INFO: namespace e2e-tests-gc-sk68r deletion completed in 11.481339942s • [SLOW TEST:53.372 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:49:04.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-7f5bec24-517b-11ea-a180-0242ac110008 STEP: Creating configMap with name cm-test-opt-upd-7f5bed17-517b-11ea-a180-0242ac110008 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-7f5bec24-517b-11ea-a180-0242ac110008 STEP: Updating configmap cm-test-opt-upd-7f5bed17-517b-11ea-a180-0242ac110008 STEP: Creating configMap with name cm-test-opt-create-7f5bed4e-517b-11ea-a180-0242ac110008 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:51:05.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-t2szg" for this suite. 
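The configMap volume test above hinges on the Optional flag: the pod may mount references to ConfigMaps that do not exist yet, and the kubelet projects keys in or removes them as the ConfigMaps are created, updated and deleted, which is the propagation the test waits roughly two minutes to observe. A small sketch of such a volume with the corev1 types this suite uses; the volume name and printed fields are just for illustration:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true

	// An optional configMap volume (like cm-test-opt-del / cm-test-opt-create
	// above): the pod still starts if the ConfigMap is missing, and the
	// mounted contents track later creates, updates and deletes.
	vol := corev1.Volume{
		Name: "cm-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
				Optional:             &optional,
			},
		},
	}
	fmt.Println(vol.VolumeSource.ConfigMap.Name, *vol.VolumeSource.ConfigMap.Optional)
}
```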
Feb 17 11:51:31.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:51:31.298: INFO: namespace: e2e-tests-configmap-t2szg, resource: bindings, ignored listing per whitelist Feb 17 11:51:31.515: INFO: namespace e2e-tests-configmap-t2szg deletion completed in 26.334358724s • [SLOW TEST:147.399 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:51:31.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 11:51:31.718: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6ef28a8-517b-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-hv8qf" to be "success or failure" Feb 17 11:51:31.731: INFO: Pod "downwardapi-volume-d6ef28a8-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.726467ms Feb 17 11:51:33.919: INFO: Pod "downwardapi-volume-d6ef28a8-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201421024s Feb 17 11:51:35.933: INFO: Pod "downwardapi-volume-d6ef28a8-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215085701s Feb 17 11:51:38.310: INFO: Pod "downwardapi-volume-d6ef28a8-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.591841042s Feb 17 11:51:40.355: INFO: Pod "downwardapi-volume-d6ef28a8-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.637099287s Feb 17 11:51:42.381: INFO: Pod "downwardapi-volume-d6ef28a8-517b-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.662908483s STEP: Saw pod success Feb 17 11:51:42.381: INFO: Pod "downwardapi-volume-d6ef28a8-517b-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:51:42.387: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-d6ef28a8-517b-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 11:51:42.686: INFO: Waiting for pod downwardapi-volume-d6ef28a8-517b-11ea-a180-0242ac110008 to disappear Feb 17 11:51:42.705: INFO: Pod downwardapi-volume-d6ef28a8-517b-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:51:42.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hv8qf" for this suite. Feb 17 11:51:48.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:51:48.990: INFO: namespace: e2e-tests-projected-hv8qf, resource: bindings, ignored listing per whitelist Feb 17 11:51:49.006: INFO: namespace e2e-tests-projected-hv8qf deletion completed in 6.285907828s • [SLOW TEST:17.491 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:51:49.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 11:51:49.294: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1664d8b-517b-11ea-a180-0242ac110008" in namespace "e2e-tests-downward-api-qm9tb" to be "success or failure" Feb 17 11:51:49.339: INFO: Pod "downwardapi-volume-e1664d8b-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 44.849078ms Feb 17 11:51:51.357: INFO: Pod "downwardapi-volume-e1664d8b-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06267885s Feb 17 11:51:53.417: INFO: Pod "downwardapi-volume-e1664d8b-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122769966s Feb 17 11:51:55.738: INFO: Pod "downwardapi-volume-e1664d8b-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.443255204s Feb 17 11:51:58.204: INFO: Pod "downwardapi-volume-e1664d8b-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.909867215s Feb 17 11:52:00.222: INFO: Pod "downwardapi-volume-e1664d8b-517b-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.927570135s STEP: Saw pod success Feb 17 11:52:00.222: INFO: Pod "downwardapi-volume-e1664d8b-517b-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:52:00.230: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-e1664d8b-517b-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 11:52:00.416: INFO: Waiting for pod downwardapi-volume-e1664d8b-517b-11ea-a180-0242ac110008 to disappear Feb 17 11:52:00.429: INFO: Pod downwardapi-volume-e1664d8b-517b-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:52:00.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qm9tb" for this suite. Feb 17 11:52:06.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:52:06.674: INFO: namespace: e2e-tests-downward-api-qm9tb, resource: bindings, ignored listing per whitelist Feb 17 11:52:06.697: INFO: namespace e2e-tests-downward-api-qm9tb deletion completed in 6.258536639s • [SLOW TEST:17.690 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:52:06.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-ebed3e09-517b-11ea-a180-0242ac110008 STEP: Creating a pod to test consume secrets Feb 17 11:52:06.967: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ebf0b7dc-517b-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-vbpft" to be "success or failure" Feb 17 11:52:06.982: INFO: Pod "pod-projected-secrets-ebf0b7dc-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.248462ms Feb 17 11:52:09.001: INFO: Pod "pod-projected-secrets-ebf0b7dc-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.033388911s Feb 17 11:52:11.026: INFO: Pod "pod-projected-secrets-ebf0b7dc-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058603329s Feb 17 11:52:13.039: INFO: Pod "pod-projected-secrets-ebf0b7dc-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071800988s Feb 17 11:52:15.719: INFO: Pod "pod-projected-secrets-ebf0b7dc-517b-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.751491744s Feb 17 11:52:17.736: INFO: Pod "pod-projected-secrets-ebf0b7dc-517b-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.768047908s STEP: Saw pod success Feb 17 11:52:17.736: INFO: Pod "pod-projected-secrets-ebf0b7dc-517b-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:52:17.741: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-ebf0b7dc-517b-11ea-a180-0242ac110008 container projected-secret-volume-test: STEP: delete the pod Feb 17 11:52:17.894: INFO: Waiting for pod pod-projected-secrets-ebf0b7dc-517b-11ea-a180-0242ac110008 to disappear Feb 17 11:52:17.899: INFO: Pod pod-projected-secrets-ebf0b7dc-517b-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:52:17.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vbpft" for this suite. Feb 17 11:52:23.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:52:24.116: INFO: namespace: e2e-tests-projected-vbpft, resource: bindings, ignored listing per whitelist Feb 17 11:52:24.181: INFO: namespace e2e-tests-projected-vbpft deletion completed in 6.258450184s • [SLOW TEST:17.484 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:52:24.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-h2x5f STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 17 11:52:24.296: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 17 11:53:06.686: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] 
Namespace:e2e-tests-pod-network-test-h2x5f PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 17 11:53:06.686: INFO: >>> kubeConfig: /root/.kube/config I0217 11:53:06.814388 8 log.go:172] (0xc001d38420) (0xc0023a0640) Create stream I0217 11:53:06.814522 8 log.go:172] (0xc001d38420) (0xc0023a0640) Stream added, broadcasting: 1 I0217 11:53:06.821871 8 log.go:172] (0xc001d38420) Reply frame received for 1 I0217 11:53:06.821910 8 log.go:172] (0xc001d38420) (0xc0023a06e0) Create stream I0217 11:53:06.821919 8 log.go:172] (0xc001d38420) (0xc0023a06e0) Stream added, broadcasting: 3 I0217 11:53:06.823992 8 log.go:172] (0xc001d38420) Reply frame received for 3 I0217 11:53:06.824079 8 log.go:172] (0xc001d38420) (0xc00240c000) Create stream I0217 11:53:06.824100 8 log.go:172] (0xc001d38420) (0xc00240c000) Stream added, broadcasting: 5 I0217 11:53:06.828787 8 log.go:172] (0xc001d38420) Reply frame received for 5 I0217 11:53:07.199007 8 log.go:172] (0xc001d38420) Data frame received for 3 I0217 11:53:07.199166 8 log.go:172] (0xc0023a06e0) (3) Data frame handling I0217 11:53:07.199196 8 log.go:172] (0xc0023a06e0) (3) Data frame sent I0217 11:53:07.382876 8 log.go:172] (0xc001d38420) Data frame received for 1 I0217 11:53:07.383079 8 log.go:172] (0xc0023a0640) (1) Data frame handling I0217 11:53:07.383121 8 log.go:172] (0xc0023a0640) (1) Data frame sent I0217 11:53:07.383209 8 log.go:172] (0xc001d38420) (0xc0023a0640) Stream removed, broadcasting: 1 I0217 11:53:07.383536 8 log.go:172] (0xc001d38420) (0xc00240c000) Stream removed, broadcasting: 5 I0217 11:53:07.383642 8 log.go:172] (0xc001d38420) (0xc0023a06e0) Stream removed, broadcasting: 3 I0217 11:53:07.383715 8 log.go:172] (0xc001d38420) (0xc0023a0640) Stream removed, broadcasting: 1 I0217 11:53:07.383726 8 log.go:172] (0xc001d38420) (0xc0023a06e0) Stream removed, broadcasting: 3 I0217 11:53:07.383744 8 log.go:172] (0xc001d38420) (0xc00240c000) Stream removed, broadcasting: 5 Feb 17 11:53:07.384: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:53:07.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-h2x5f" for this suite. 
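The intra-pod check above execs a curl inside host-test-container-pod against one test pod's /dial endpoint, which in turn dials the other test pod over HTTP and echoes back the hostName it reached. A Go sketch of the same request; the URL shape and the 10.32.0.x addresses are taken from the log and would differ on another cluster, and this is an illustration of the pattern rather than the suite's own helper:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
)

// dial mirrors the request the test execs inside host-test-container-pod: it
// asks one test pod (the "dialer", 10.32.0.5 in the log) to reach another test
// pod (10.32.0.4) over HTTP and report the hostName it got back.
func dial(dialerIP, targetIP string) (string, error) {
	u := fmt.Sprintf(
		"http://%s:8080/dial?request=hostName&protocol=http&host=%s&port=8080&tries=1",
		dialerIP, url.QueryEscape(targetIP),
	)
	resp, err := http.Get(u)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	out, err := dial("10.32.0.5", "10.32.0.4")
	fmt.Println(out, err)
}
```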
Feb 17 11:53:33.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:53:33.778: INFO: namespace: e2e-tests-pod-network-test-h2x5f, resource: bindings, ignored listing per whitelist Feb 17 11:53:33.820: INFO: namespace e2e-tests-pod-network-test-h2x5f deletion completed in 26.404533101s • [SLOW TEST:69.638 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:53:33.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 11:53:34.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1fe64b73-517c-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-kjk67" to be "success or failure" Feb 17 11:53:34.205: INFO: Pod "downwardapi-volume-1fe64b73-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.033274ms Feb 17 11:53:36.222: INFO: Pod "downwardapi-volume-1fe64b73-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027510103s Feb 17 11:53:38.234: INFO: Pod "downwardapi-volume-1fe64b73-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039574603s Feb 17 11:53:40.252: INFO: Pod "downwardapi-volume-1fe64b73-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057441143s Feb 17 11:53:42.269: INFO: Pod "downwardapi-volume-1fe64b73-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074403218s Feb 17 11:53:44.457: INFO: Pod "downwardapi-volume-1fe64b73-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.262920688s Feb 17 11:53:46.846: INFO: Pod "downwardapi-volume-1fe64b73-517c-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.651916665s STEP: Saw pod success Feb 17 11:53:46.847: INFO: Pod "downwardapi-volume-1fe64b73-517c-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:53:46.904: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1fe64b73-517c-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 11:53:47.030: INFO: Waiting for pod downwardapi-volume-1fe64b73-517c-11ea-a180-0242ac110008 to disappear Feb 17 11:53:47.077: INFO: Pod downwardapi-volume-1fe64b73-517c-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:53:47.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kjk67" for this suite. Feb 17 11:53:55.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:53:55.259: INFO: namespace: e2e-tests-projected-kjk67, resource: bindings, ignored listing per whitelist Feb 17 11:53:55.316: INFO: namespace e2e-tests-projected-kjk67 deletion completed in 8.232493331s • [SLOW TEST:21.495 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:53:55.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-2caa64e2-517c-11ea-a180-0242ac110008 STEP: Creating a pod to test consume secrets Feb 17 11:53:55.852: INFO: Waiting up to 5m0s for pod "pod-secrets-2cd749d7-517c-11ea-a180-0242ac110008" in namespace "e2e-tests-secrets-pxz5q" to be "success or failure" Feb 17 11:53:55.921: INFO: Pod "pod-secrets-2cd749d7-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 68.35492ms Feb 17 11:53:58.306: INFO: Pod "pod-secrets-2cd749d7-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.454266134s Feb 17 11:54:00.324: INFO: Pod "pod-secrets-2cd749d7-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471404858s Feb 17 11:54:02.950: INFO: Pod "pod-secrets-2cd749d7-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.097608419s Feb 17 11:54:04.964: INFO: Pod "pod-secrets-2cd749d7-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.111504277s Feb 17 11:54:06.988: INFO: Pod "pod-secrets-2cd749d7-517c-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.135290822s STEP: Saw pod success Feb 17 11:54:06.988: INFO: Pod "pod-secrets-2cd749d7-517c-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:54:07.008: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-2cd749d7-517c-11ea-a180-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 17 11:54:07.202: INFO: Waiting for pod pod-secrets-2cd749d7-517c-11ea-a180-0242ac110008 to disappear Feb 17 11:54:07.216: INFO: Pod pod-secrets-2cd749d7-517c-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:54:07.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-pxz5q" for this suite. Feb 17 11:54:15.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:54:15.348: INFO: namespace: e2e-tests-secrets-pxz5q, resource: bindings, ignored listing per whitelist Feb 17 11:54:15.498: INFO: namespace e2e-tests-secrets-pxz5q deletion completed in 8.268888348s STEP: Destroying namespace "e2e-tests-secret-namespace-wsjwf" for this suite. Feb 17 11:54:21.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:54:21.663: INFO: namespace: e2e-tests-secret-namespace-wsjwf, resource: bindings, ignored listing per whitelist Feb 17 11:54:21.808: INFO: namespace e2e-tests-secret-namespace-wsjwf deletion completed in 6.310639617s • [SLOW TEST:26.492 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:54:21.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 11:54:22.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c7a157b-517c-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-87252" to be "success or failure" Feb 17 
11:54:22.091: INFO: Pod "downwardapi-volume-3c7a157b-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.585361ms Feb 17 11:54:24.166: INFO: Pod "downwardapi-volume-3c7a157b-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083896083s Feb 17 11:54:26.193: INFO: Pod "downwardapi-volume-3c7a157b-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111142301s Feb 17 11:54:28.382: INFO: Pod "downwardapi-volume-3c7a157b-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.299930396s Feb 17 11:54:30.399: INFO: Pod "downwardapi-volume-3c7a157b-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.316929722s Feb 17 11:54:32.416: INFO: Pod "downwardapi-volume-3c7a157b-517c-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.334174956s STEP: Saw pod success Feb 17 11:54:32.416: INFO: Pod "downwardapi-volume-3c7a157b-517c-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:54:32.425: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-3c7a157b-517c-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 11:54:32.852: INFO: Waiting for pod downwardapi-volume-3c7a157b-517c-11ea-a180-0242ac110008 to disappear Feb 17 11:54:32.879: INFO: Pod downwardapi-volume-3c7a157b-517c-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:54:32.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-87252" for this suite. 
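The downward API volume tests in this run all follow one pattern: the container declares resource requests or limits, a downwardAPI (here projected) volume exposes the chosen resource field as a file, and the test container prints it for comparison. A sketch of the "requests.memory" case with the corev1 types this suite uses; the busybox image, mount path and 32Mi request are illustrative assumptions, while the container name client-container matches the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// A projected downwardAPI volume that writes the container's memory
	// request into a file the pod can cat back out.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				}},
			},
		},
	}

	container := corev1.Container{
		Name:  "client-container",
		Image: "busybox",
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceMemory: resource.MustParse("32Mi"),
			},
		},
		VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
	}
	fmt.Println(vol.Name, container.Resources.Requests.Memory().String())
}
```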
Feb 17 11:54:39.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:54:39.197: INFO: namespace: e2e-tests-projected-87252, resource: bindings, ignored listing per whitelist Feb 17 11:54:39.256: INFO: namespace e2e-tests-projected-87252 deletion completed in 6.370818643s • [SLOW TEST:17.446 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:54:39.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Feb 17 11:54:39.600: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:54:39.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qv8hd" for this suite. 
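The proxy test above starts "kubectl proxy -p 0 --disable-filter" (port 0 means pick an ephemeral port) and then curls /api/ through it. A rough Go sketch of the same flow; the flags mirror the command in the log, while the "Starting to serve on 127.0.0.1:PORT" banner the code parses is kubectl's usual startup line and is assumed here rather than shown in this log:

```go
package main

import (
	"bufio"
	"fmt"
	"io/ioutil"
	"net/http"
	"os/exec"
	"regexp"
)

func main() {
	// Start the proxy on an ephemeral port and read its first stdout line,
	// which is expected to contain the address it bound to.
	cmd := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
		"proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	line, _ := bufio.NewReader(stdout).ReadString('\n')
	port := regexp.MustCompile(`:(\d+)`).FindStringSubmatch(line)
	if port == nil {
		panic("could not find port in: " + line)
	}

	// Hit the apiserver through the local proxy, as the test's curl does.
	resp, err := http.Get(fmt.Sprintf("http://127.0.0.1:%s/api/", port[1]))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body)) // should list the server's API versions
}
```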
Feb 17 11:54:45.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:54:45.895: INFO: namespace: e2e-tests-kubectl-qv8hd, resource: bindings, ignored listing per whitelist Feb 17 11:54:46.059: INFO: namespace e2e-tests-kubectl-qv8hd deletion completed in 6.30636672s • [SLOW TEST:6.802 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:54:46.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 11:54:46.279: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ae47822-517c-11ea-a180-0242ac110008" in namespace "e2e-tests-downward-api-fdwv4" to be "success or failure" Feb 17 11:54:46.371: INFO: Pod "downwardapi-volume-4ae47822-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 91.906062ms Feb 17 11:54:48.385: INFO: Pod "downwardapi-volume-4ae47822-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105245272s Feb 17 11:54:50.403: INFO: Pod "downwardapi-volume-4ae47822-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123544431s Feb 17 11:54:52.555: INFO: Pod "downwardapi-volume-4ae47822-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.275804354s Feb 17 11:54:54.589: INFO: Pod "downwardapi-volume-4ae47822-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.309371858s Feb 17 11:54:56.631: INFO: Pod "downwardapi-volume-4ae47822-517c-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.35140175s STEP: Saw pod success Feb 17 11:54:56.631: INFO: Pod "downwardapi-volume-4ae47822-517c-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:54:56.643: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4ae47822-517c-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 11:54:57.565: INFO: Waiting for pod downwardapi-volume-4ae47822-517c-11ea-a180-0242ac110008 to disappear Feb 17 11:54:58.070: INFO: Pod downwardapi-volume-4ae47822-517c-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:54:58.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-fdwv4" for this suite. Feb 17 11:55:06.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:55:06.347: INFO: namespace: e2e-tests-downward-api-fdwv4, resource: bindings, ignored listing per whitelist Feb 17 11:55:06.581: INFO: namespace e2e-tests-downward-api-fdwv4 deletion completed in 8.493894811s • [SLOW TEST:20.522 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:55:06.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions Feb 17 11:55:06.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Feb 17 11:55:07.016: INFO: stderr: "" Feb 17 11:55:07.016: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:55:07.016: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "e2e-tests-kubectl-c7b5n" for this suite. Feb 17 11:55:13.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:55:13.249: INFO: namespace: e2e-tests-kubectl-c7b5n, resource: bindings, ignored listing per whitelist Feb 17 11:55:13.310: INFO: namespace e2e-tests-kubectl-c7b5n deletion completed in 6.281193124s • [SLOW TEST:6.727 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:55:13.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-5b229507-517c-11ea-a180-0242ac110008 STEP: Creating secret with name s-test-opt-upd-5b2295e2-517c-11ea-a180-0242ac110008 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5b229507-517c-11ea-a180-0242ac110008 STEP: Updating secret s-test-opt-upd-5b2295e2-517c-11ea-a180-0242ac110008 STEP: Creating secret with name s-test-opt-create-5b22962c-517c-11ea-a180-0242ac110008 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:55:30.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lpnx6" for this suite. 
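The projected-secret test that just finished (secrets s-test-opt-del-..., s-test-opt-upd-..., s-test-opt-create-...) depends on the secret sources being marked optional: the pod keeps running while one secret is deleted, another is updated, and a third is created afterwards, and the kubelet refreshes the projected volume in place. A sketch of that pod shape with the same v1.13-era core/v1 types; names without the UID suffixes, the image, and the mount path are illustrative.

```go
// Minimal sketch of a pod with optional projected secret sources, as exercised by
// the "optional updates should be reflected in volume" test above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func optionalSecretProjection(secretName string) corev1.VolumeProjection {
	optional := true
	return corev1.VolumeProjection{
		Secret: &corev1.SecretProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
			Optional:             &optional, // pod starts even if the secret does not exist yet
		},
	}
}

func projectedSecretPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example", Namespace: namespace},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do ls /etc/projected; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secrets",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secrets",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							optionalSecretProjection("s-test-opt-del"),
							optionalSecretProjection("s-test-opt-upd"),
							optionalSecretProjection("s-test-opt-create"),
						},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(projectedSecretPod("default").Name)
}
```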
Feb 17 11:55:54.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:55:54.353: INFO: namespace: e2e-tests-projected-lpnx6, resource: bindings, ignored listing per whitelist Feb 17 11:55:54.443: INFO: namespace e2e-tests-projected-lpnx6 deletion completed in 24.238211157s • [SLOW TEST:41.133 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:55:54.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-brthp/configmap-test-73bb39f8-517c-11ea-a180-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 17 11:55:54.884: INFO: Waiting up to 5m0s for pod "pod-configmaps-73c61354-517c-11ea-a180-0242ac110008" in namespace "e2e-tests-configmap-brthp" to be "success or failure" Feb 17 11:55:54.911: INFO: Pod "pod-configmaps-73c61354-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 26.846581ms Feb 17 11:55:57.176: INFO: Pod "pod-configmaps-73c61354-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291470019s Feb 17 11:55:59.192: INFO: Pod "pod-configmaps-73c61354-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307884115s Feb 17 11:56:01.263: INFO: Pod "pod-configmaps-73c61354-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.378267745s Feb 17 11:56:03.279: INFO: Pod "pod-configmaps-73c61354-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.395069005s Feb 17 11:56:05.312: INFO: Pod "pod-configmaps-73c61354-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.427265716s Feb 17 11:56:07.331: INFO: Pod "pod-configmaps-73c61354-517c-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.447035186s STEP: Saw pod success Feb 17 11:56:07.332: INFO: Pod "pod-configmaps-73c61354-517c-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:56:07.340: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-73c61354-517c-11ea-a180-0242ac110008 container env-test: STEP: delete the pod Feb 17 11:56:07.450: INFO: Waiting for pod pod-configmaps-73c61354-517c-11ea-a180-0242ac110008 to disappear Feb 17 11:56:07.479: INFO: Pod pod-configmaps-73c61354-517c-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:56:07.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-brthp" for this suite. Feb 17 11:56:13.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:56:13.618: INFO: namespace: e2e-tests-configmap-brthp, resource: bindings, ignored listing per whitelist Feb 17 11:56:13.721: INFO: namespace e2e-tests-configmap-brthp deletion completed in 6.233033317s • [SLOW TEST:19.276 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:56:13.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 11:56:14.052: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f37d0a9-517c-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-nlc6q" to be "success or failure" Feb 17 11:56:14.149: INFO: Pod "downwardapi-volume-7f37d0a9-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 96.211829ms Feb 17 11:56:16.274: INFO: Pod "downwardapi-volume-7f37d0a9-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221990224s Feb 17 11:56:18.292: INFO: Pod "downwardapi-volume-7f37d0a9-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239062545s Feb 17 11:56:20.449: INFO: Pod "downwardapi-volume-7f37d0a9-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396733367s Feb 17 11:56:22.513: INFO: Pod "downwardapi-volume-7f37d0a9-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.460527896s Feb 17 11:56:24.534: INFO: Pod "downwardapi-volume-7f37d0a9-517c-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.481054754s STEP: Saw pod success Feb 17 11:56:24.534: INFO: Pod "downwardapi-volume-7f37d0a9-517c-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:56:24.568: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7f37d0a9-517c-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 11:56:25.461: INFO: Waiting for pod downwardapi-volume-7f37d0a9-517c-11ea-a180-0242ac110008 to disappear Feb 17 11:56:25.785: INFO: Pod downwardapi-volume-7f37d0a9-517c-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:56:25.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nlc6q" for this suite. Feb 17 11:56:31.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:56:32.009: INFO: namespace: e2e-tests-projected-nlc6q, resource: bindings, ignored listing per whitelist Feb 17 11:56:32.044: INFO: namespace e2e-tests-projected-nlc6q deletion completed in 6.242977119s • [SLOW TEST:18.323 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:56:32.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 17 11:56:32.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-429vg' Feb 17 11:56:34.185: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 17 11:56:34.186: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Feb 17 11:56:34.260: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Feb 17 11:56:34.317: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Feb 17 11:56:34.349: INFO: scanned /root for discovery docs: Feb 17 11:56:34.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-429vg' Feb 17 11:56:59.654: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 17 11:56:59.655: INFO: stdout: "Created e2e-test-nginx-rc-92dcd5e09ba8c5c51fc5f03aeb91af98\nScaling up e2e-test-nginx-rc-92dcd5e09ba8c5c51fc5f03aeb91af98 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-92dcd5e09ba8c5c51fc5f03aeb91af98 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-92dcd5e09ba8c5c51fc5f03aeb91af98 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Feb 17 11:56:59.655: INFO: stdout: "Created e2e-test-nginx-rc-92dcd5e09ba8c5c51fc5f03aeb91af98\nScaling up e2e-test-nginx-rc-92dcd5e09ba8c5c51fc5f03aeb91af98 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-92dcd5e09ba8c5c51fc5f03aeb91af98 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-92dcd5e09ba8c5c51fc5f03aeb91af98 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Feb 17 11:56:59.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-429vg' Feb 17 11:56:59.791: INFO: stderr: "" Feb 17 11:56:59.791: INFO: stdout: "e2e-test-nginx-rc-8qcv9 e2e-test-nginx-rc-92dcd5e09ba8c5c51fc5f03aeb91af98-99s2r " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 17 11:57:04.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-429vg' Feb 17 11:57:04.966: INFO: stderr: "" Feb 17 11:57:04.967: INFO: stdout: "e2e-test-nginx-rc-92dcd5e09ba8c5c51fc5f03aeb91af98-99s2r " Feb 17 11:57:04.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-92dcd5e09ba8c5c51fc5f03aeb91af98-99s2r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-429vg' Feb 17 11:57:05.095: INFO: stderr: "" Feb 17 11:57:05.095: INFO: stdout: "true" Feb 17 11:57:05.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-92dcd5e09ba8c5c51fc5f03aeb91af98-99s2r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-429vg' Feb 17 11:57:05.208: INFO: stderr: "" Feb 17 11:57:05.208: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Feb 17 11:57:05.208: INFO: e2e-test-nginx-rc-92dcd5e09ba8c5c51fc5f03aeb91af98-99s2r is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Feb 17 11:57:05.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-429vg' Feb 17 11:57:05.380: INFO: stderr: "" Feb 17 11:57:05.380: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:57:05.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-429vg" for this suite. Feb 17 11:57:29.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:57:29.657: INFO: namespace: e2e-tests-kubectl-429vg, resource: bindings, ignored listing per whitelist Feb 17 11:57:29.688: INFO: namespace e2e-tests-kubectl-429vg deletion completed in 24.293953847s • [SLOW TEST:57.643 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:57:29.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 17 11:57:40.667: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ac774741-517c-11ea-a180-0242ac110008" Feb 17 11:57:40.667: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ac774741-517c-11ea-a180-0242ac110008" in namespace "e2e-tests-pods-2h7p6" to be "terminated due to 
deadline exceeded" Feb 17 11:57:40.679: INFO: Pod "pod-update-activedeadlineseconds-ac774741-517c-11ea-a180-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 11.474875ms Feb 17 11:57:42.747: INFO: Pod "pod-update-activedeadlineseconds-ac774741-517c-11ea-a180-0242ac110008": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.079707059s Feb 17 11:57:42.747: INFO: Pod "pod-update-activedeadlineseconds-ac774741-517c-11ea-a180-0242ac110008" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:57:42.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-2h7p6" for this suite. Feb 17 11:57:48.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:57:48.868: INFO: namespace: e2e-tests-pods-2h7p6, resource: bindings, ignored listing per whitelist Feb 17 11:57:48.939: INFO: namespace e2e-tests-pods-2h7p6 deletion completed in 6.176328567s • [SLOW TEST:19.251 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:57:48.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 11:57:49.236: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7f35e23-517c-11ea-a180-0242ac110008" in namespace "e2e-tests-downward-api-xssf8" to be "success or failure" Feb 17 11:57:49.376: INFO: Pod "downwardapi-volume-b7f35e23-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 139.31401ms Feb 17 11:57:51.398: INFO: Pod "downwardapi-volume-b7f35e23-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161616955s Feb 17 11:57:53.417: INFO: Pod "downwardapi-volume-b7f35e23-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180612919s Feb 17 11:57:55.640: INFO: Pod "downwardapi-volume-b7f35e23-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.40362326s Feb 17 11:57:58.173: INFO: Pod "downwardapi-volume-b7f35e23-517c-11ea-a180-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 8.936154893s Feb 17 11:58:00.187: INFO: Pod "downwardapi-volume-b7f35e23-517c-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.950019682s STEP: Saw pod success Feb 17 11:58:00.187: INFO: Pod "downwardapi-volume-b7f35e23-517c-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:58:00.191: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b7f35e23-517c-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 11:58:00.369: INFO: Waiting for pod downwardapi-volume-b7f35e23-517c-11ea-a180-0242ac110008 to disappear Feb 17 11:58:00.374: INFO: Pod downwardapi-volume-b7f35e23-517c-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:58:00.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xssf8" for this suite. Feb 17 11:58:06.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:58:06.494: INFO: namespace: e2e-tests-downward-api-xssf8, resource: bindings, ignored listing per whitelist Feb 17 11:58:06.682: INFO: namespace e2e-tests-downward-api-xssf8 deletion completed in 6.302425173s • [SLOW TEST:17.743 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:58:06.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-c2773ed9-517c-11ea-a180-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 17 11:58:06.908: INFO: Waiting up to 5m0s for pod "pod-configmaps-c278b659-517c-11ea-a180-0242ac110008" in namespace "e2e-tests-configmap-wb6xf" to be "success or failure" Feb 17 11:58:06.958: INFO: Pod "pod-configmaps-c278b659-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 50.524428ms Feb 17 11:58:09.229: INFO: Pod "pod-configmaps-c278b659-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321105278s Feb 17 11:58:11.251: INFO: Pod "pod-configmaps-c278b659-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343466369s Feb 17 11:58:13.539: INFO: Pod "pod-configmaps-c278b659-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.631392312s Feb 17 11:58:15.956: INFO: Pod "pod-configmaps-c278b659-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.047930296s Feb 17 11:58:17.989: INFO: Pod "pod-configmaps-c278b659-517c-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.081184373s STEP: Saw pod success Feb 17 11:58:17.989: INFO: Pod "pod-configmaps-c278b659-517c-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:58:18.016: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c278b659-517c-11ea-a180-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 17 11:58:18.241: INFO: Waiting for pod pod-configmaps-c278b659-517c-11ea-a180-0242ac110008 to disappear Feb 17 11:58:18.258: INFO: Pod pod-configmaps-c278b659-517c-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:58:18.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-wb6xf" for this suite. Feb 17 11:58:24.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:58:24.414: INFO: namespace: e2e-tests-configmap-wb6xf, resource: bindings, ignored listing per whitelist Feb 17 11:58:24.736: INFO: namespace e2e-tests-configmap-wb6xf deletion completed in 6.467312439s • [SLOW TEST:18.053 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:58:24.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 17 11:58:25.170: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:58:51.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-n7cw6" for this suite. 
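The init-container test above boils down to a RestartAlways pod whose init containers must each exit successfully, in order, before the regular container starts. A minimal sketch of that spec follows; container names, images, and commands are illustrative rather than taken from the test source, which asserts on the pod's Initialized/Ready conditions rather than just creating the pod.

```go
// Sketch: a RestartAlways pod with two init containers that must both complete
// before the long-running main container starts.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func initContainerPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				// Only starts once init1 and init2 have both exited 0, in that order.
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
}

func main() {
	fmt.Println(initContainerPod("default").Name)
}
```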
Feb 17 11:59:16.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:59:16.217: INFO: namespace: e2e-tests-init-container-n7cw6, resource: bindings, ignored listing per whitelist Feb 17 11:59:16.248: INFO: namespace e2e-tests-init-container-n7cw6 deletion completed in 24.201430358s • [SLOW TEST:51.511 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:59:16.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 17 11:59:16.549: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-v4hxk,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4hxk/configmaps/e2e-watch-test-watch-closed,UID:ebf49ff5-517c-11ea-a994-fa163e34d433,ResourceVersion:21974972,Generation:0,CreationTimestamp:2020-02-17 11:59:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 17 11:59:16.550: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-v4hxk,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4hxk/configmaps/e2e-watch-test-watch-closed,UID:ebf49ff5-517c-11ea-a994-fa163e34d433,ResourceVersion:21974973,Generation:0,CreationTimestamp:2020-02-17 11:59:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 17 11:59:16.600: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-v4hxk,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4hxk/configmaps/e2e-watch-test-watch-closed,UID:ebf49ff5-517c-11ea-a994-fa163e34d433,ResourceVersion:21974974,Generation:0,CreationTimestamp:2020-02-17 11:59:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 17 11:59:16.600: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-v4hxk,SelfLink:/api/v1/namespaces/e2e-tests-watch-v4hxk/configmaps/e2e-watch-test-watch-closed,UID:ebf49ff5-517c-11ea-a994-fa163e34d433,ResourceVersion:21974975,Generation:0,CreationTimestamp:2020-02-17 11:59:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:59:16.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-v4hxk" for this suite. Feb 17 11:59:22.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:59:22.896: INFO: namespace: e2e-tests-watch-v4hxk, resource: bindings, ignored listing per whitelist Feb 17 11:59:22.939: INFO: namespace e2e-tests-watch-v4hxk deletion completed in 6.315491651s • [SLOW TEST:6.691 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:59:22.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 17 11:59:23.163: INFO: Waiting up to 5m0s for pod "pod-efee85bd-517c-11ea-a180-0242ac110008" in namespace "e2e-tests-emptydir-fmkxn" to be "success or failure" Feb 17 11:59:23.169: INFO: Pod "pod-efee85bd-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.502631ms Feb 17 11:59:25.499: INFO: Pod "pod-efee85bd-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.33564435s Feb 17 11:59:27.529: INFO: Pod "pod-efee85bd-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.365876512s Feb 17 11:59:29.548: INFO: Pod "pod-efee85bd-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.385128433s Feb 17 11:59:31.560: INFO: Pod "pod-efee85bd-517c-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.397059358s Feb 17 11:59:33.885: INFO: Pod "pod-efee85bd-517c-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.721820725s STEP: Saw pod success Feb 17 11:59:33.885: INFO: Pod "pod-efee85bd-517c-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 11:59:33.897: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-efee85bd-517c-11ea-a180-0242ac110008 container test-container: STEP: delete the pod Feb 17 11:59:34.245: INFO: Waiting for pod pod-efee85bd-517c-11ea-a180-0242ac110008 to disappear Feb 17 11:59:34.254: INFO: Pod pod-efee85bd-517c-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:59:34.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-fmkxn" for this suite. Feb 17 11:59:40.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:59:40.459: INFO: namespace: e2e-tests-emptydir-fmkxn, resource: bindings, ignored listing per whitelist Feb 17 11:59:40.504: INFO: namespace e2e-tests-emptydir-fmkxn deletion completed in 6.243596509s • [SLOW TEST:17.565 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:59:40.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 17 11:59:40.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-tnr4l' Feb 17 
11:59:40.945: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 17 11:59:40.945: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Feb 17 11:59:45.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-tnr4l' Feb 17 11:59:45.218: INFO: stderr: "" Feb 17 11:59:45.218: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 11:59:45.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tnr4l" for this suite. Feb 17 11:59:51.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 11:59:51.418: INFO: namespace: e2e-tests-kubectl-tnr4l, resource: bindings, ignored listing per whitelist Feb 17 11:59:51.463: INFO: namespace e2e-tests-kubectl-tnr4l deletion completed in 6.232304267s • [SLOW TEST:10.958 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 11:59:51.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 11:59:51.717: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00f2178a-517d-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-thkfx" to be "success or failure" Feb 17 11:59:51.744: INFO: Pod "downwardapi-volume-00f2178a-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 27.606455ms Feb 17 11:59:53.762: INFO: Pod "downwardapi-volume-00f2178a-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.044971573s Feb 17 11:59:55.778: INFO: Pod "downwardapi-volume-00f2178a-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06137645s Feb 17 11:59:57.794: INFO: Pod "downwardapi-volume-00f2178a-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077135316s Feb 17 12:00:00.382: INFO: Pod "downwardapi-volume-00f2178a-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.664968698s Feb 17 12:00:02.396: INFO: Pod "downwardapi-volume-00f2178a-517d-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.679525461s STEP: Saw pod success Feb 17 12:00:02.397: INFO: Pod "downwardapi-volume-00f2178a-517d-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:00:02.401: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-00f2178a-517d-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 12:00:02.457: INFO: Waiting for pod downwardapi-volume-00f2178a-517d-11ea-a180-0242ac110008 to disappear Feb 17 12:00:02.470: INFO: Pod downwardapi-volume-00f2178a-517d-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:00:02.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-thkfx" for this suite. Feb 17 12:00:08.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:00:08.787: INFO: namespace: e2e-tests-projected-thkfx, resource: bindings, ignored listing per whitelist Feb 17 12:00:08.810: INFO: namespace e2e-tests-projected-thkfx deletion completed in 6.218851695s • [SLOW TEST:17.347 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:00:08.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Feb 17 12:00:09.074: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b4d0b1a-517d-11ea-a180-0242ac110008" in namespace "e2e-tests-downward-api-c8r74" to be "success or failure" Feb 17 12:00:09.142: INFO: Pod "downwardapi-volume-0b4d0b1a-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 68.20522ms Feb 17 12:00:11.353: INFO: Pod "downwardapi-volume-0b4d0b1a-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279089858s Feb 17 12:00:13.377: INFO: Pod "downwardapi-volume-0b4d0b1a-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30313668s Feb 17 12:00:16.009: INFO: Pod "downwardapi-volume-0b4d0b1a-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.934890078s Feb 17 12:00:18.113: INFO: Pod "downwardapi-volume-0b4d0b1a-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.038536423s Feb 17 12:00:20.129: INFO: Pod "downwardapi-volume-0b4d0b1a-517d-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.054963197s STEP: Saw pod success Feb 17 12:00:20.129: INFO: Pod "downwardapi-volume-0b4d0b1a-517d-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:00:20.134: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0b4d0b1a-517d-11ea-a180-0242ac110008 container client-container: STEP: delete the pod Feb 17 12:00:20.725: INFO: Waiting for pod downwardapi-volume-0b4d0b1a-517d-11ea-a180-0242ac110008 to disappear Feb 17 12:00:21.244: INFO: Pod downwardapi-volume-0b4d0b1a-517d-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:00:21.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-c8r74" for this suite. Feb 17 12:00:27.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:00:28.152: INFO: namespace: e2e-tests-downward-api-c8r74, resource: bindings, ignored listing per whitelist Feb 17 12:00:28.161: INFO: namespace e2e-tests-downward-api-c8r74 deletion completed in 6.472509529s • [SLOW TEST:19.350 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:00:28.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-16ca69be-517d-11ea-a180-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 17 12:00:28.367: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-16cb1c34-517d-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-bdp4l" to be "success or failure" Feb 
17 12:00:28.419: INFO: Pod "pod-projected-configmaps-16cb1c34-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 51.94146ms Feb 17 12:00:30.432: INFO: Pod "pod-projected-configmaps-16cb1c34-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065288509s Feb 17 12:00:32.452: INFO: Pod "pod-projected-configmaps-16cb1c34-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085335179s Feb 17 12:00:36.358: INFO: Pod "pod-projected-configmaps-16cb1c34-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.990941921s Feb 17 12:00:38.381: INFO: Pod "pod-projected-configmaps-16cb1c34-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014255526s Feb 17 12:00:40.400: INFO: Pod "pod-projected-configmaps-16cb1c34-517d-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.032503817s STEP: Saw pod success Feb 17 12:00:40.400: INFO: Pod "pod-projected-configmaps-16cb1c34-517d-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:00:40.408: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-16cb1c34-517d-11ea-a180-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 17 12:00:40.507: INFO: Waiting for pod pod-projected-configmaps-16cb1c34-517d-11ea-a180-0242ac110008 to disappear Feb 17 12:00:40.516: INFO: Pod pod-projected-configmaps-16cb1c34-517d-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:00:40.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bdp4l" for this suite. Feb 17 12:00:46.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:00:46.779: INFO: namespace: e2e-tests-projected-bdp4l, resource: bindings, ignored listing per whitelist Feb 17 12:00:46.917: INFO: namespace e2e-tests-projected-bdp4l deletion completed in 6.385493096s • [SLOW TEST:18.755 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:00:46.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Feb 17 12:00:47.272: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 17 12:00:47.291: INFO: Waiting for terminating namespaces to be deleted... 
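The projected-configmap run that finished just above combines two things: remapping a configmap key to a different file path via Items, and running the pod as a non-root user so the mapped file must still be readable at that UID. A sketch of that combination; the key name, UID, mount path, and container command are assumptions for illustration only.

```go
// Sketch: projected configmap volume with a key-to-path mapping, consumed by a
// pod running as a non-root user.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nonRootProjectedConfigMapPod(namespace, configMapName string) *corev1.Pod {
	nonRootUID := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example", Namespace: namespace},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &nonRootUID, // run the whole pod as a non-root user
			},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
								// Map the key "data-2" to a nested file name inside the mount.
								Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(nonRootProjectedConfigMapPod("default", "projected-configmap-test-volume-map").Name)
}
```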
Feb 17 12:00:47.296: INFO: Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test Feb 17 12:00:47.317: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded) Feb 17 12:00:47.317: INFO: Container kube-proxy ready: true, restart count 0 Feb 17 12:00:47.317: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 17 12:00:47.317: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded) Feb 17 12:00:47.317: INFO: Container weave ready: true, restart count 0 Feb 17 12:00:47.317: INFO: Container weave-npc ready: true, restart count 0 Feb 17 12:00:47.317: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 17 12:00:47.317: INFO: Container coredns ready: true, restart count 0 Feb 17 12:00:47.317: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 17 12:00:47.317: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 17 12:00:47.317: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at (0 container statuses recorded) Feb 17 12:00:47.317: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded) Feb 17 12:00:47.317: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-server-hu5at5svl7ps Feb 17 12:00:47.451: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 17 12:00:47.452: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 17 12:00:47.452: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Feb 17 12:00:47.452: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps Feb 17 12:00:47.452: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps Feb 17 12:00:47.452: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps Feb 17 12:00:47.452: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps Feb 17 12:00:47.452: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
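The predicate being validated here is plain arithmetic over container CPU requests: the test sums what is already requested on the node (the per-pod figures logged above), fills the remainder with pause "filler" pods, then submits one more pod asking for more CPU than is left, expecting a FailedScheduling event. A rough sketch of that last pod, with an assumed 600m request since the log does not show the computed figure:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newOverCommittedPod sketches the "additional" pod whose CPU request cannot be
// satisfied once the filler pods have consumed the node's remaining CPU, so the
// scheduler is expected to report 1 Insufficient cpu for it.
func newOverCommittedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					// 600m is illustrative; the e2e test derives the request from
					// the node's allocatable CPU minus what is already requested.
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("600m")},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("600m")},
				},
			}},
		},
	}
}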
STEP: Considering event: Type = [Normal], Name = [filler-pod-222f7de0-517d-11ea-a180-0242ac110008.15f42ed1be1255d7], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-t7bc8/filler-pod-222f7de0-517d-11ea-a180-0242ac110008 to hunter-server-hu5at5svl7ps] STEP: Considering event: Type = [Normal], Name = [filler-pod-222f7de0-517d-11ea-a180-0242ac110008.15f42ed2cd30aad0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-222f7de0-517d-11ea-a180-0242ac110008.15f42ed36d7bbd62], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-222f7de0-517d-11ea-a180-0242ac110008.15f42ed390ec355c], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f42ed418e7cdc5], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.] STEP: removing the label node off the node hunter-server-hu5at5svl7ps STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:00:58.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-t7bc8" for this suite. Feb 17 12:01:04.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:01:05.060: INFO: namespace: e2e-tests-sched-pred-t7bc8, resource: bindings, ignored listing per whitelist Feb 17 12:01:06.177: INFO: namespace e2e-tests-sched-pred-t7bc8 deletion completed in 7.294298338s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:19.260 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:01:06.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 12:01:18.706: INFO: Waiting up to 5m0s for pod "client-envvars-34cc0433-517d-11ea-a180-0242ac110008" in namespace "e2e-tests-pods-st6c4" to be "success or failure" Feb 17 12:01:18.724: INFO: Pod "client-envvars-34cc0433-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.892332ms Feb 17 12:01:20.750: INFO: Pod "client-envvars-34cc0433-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042771847s Feb 17 12:01:22.767: INFO: Pod "client-envvars-34cc0433-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060262806s Feb 17 12:01:24.793: INFO: Pod "client-envvars-34cc0433-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0859393s Feb 17 12:01:26.860: INFO: Pod "client-envvars-34cc0433-517d-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.153316273s STEP: Saw pod success Feb 17 12:01:26.860: INFO: Pod "client-envvars-34cc0433-517d-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:01:26.879: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-34cc0433-517d-11ea-a180-0242ac110008 container env3cont: STEP: delete the pod Feb 17 12:01:27.050: INFO: Waiting for pod client-envvars-34cc0433-517d-11ea-a180-0242ac110008 to disappear Feb 17 12:01:27.128: INFO: Pod client-envvars-34cc0433-517d-11ea-a180-0242ac110008 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:01:27.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-st6c4" for this suite. Feb 17 12:02:11.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:02:11.224: INFO: namespace: e2e-tests-pods-st6c4, resource: bindings, ignored listing per whitelist Feb 17 12:02:11.489: INFO: namespace e2e-tests-pods-st6c4 deletion completed in 44.350995577s • [SLOW TEST:65.312 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:02:11.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 12:02:11.946: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 17 12:02:11.974: INFO: Number of nodes with available pods: 0 Feb 17 12:02:11.975: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
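A DaemonSet whose pod template carries a node selector schedules pods only onto nodes with the matching label, which is why the daemon pod appears below only after the node is labelled blue. A minimal sketch of such a DaemonSet; the label keys, image, and selector values are assumptions, not values taken from this log:

package e2esketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newSelectorDaemonSet sketches a DaemonSet whose pods run only on nodes
// labelled color=blue; relabelling the node later drains the daemon pod again.
func newSelectorDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					NodeSelector: map[string]string{"color": "blue"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "k8s.gcr.io/pause:3.1",
					}},
				},
			},
		},
	}
}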
Feb 17 12:02:12.020: INFO: Number of nodes with available pods: 0 Feb 17 12:02:12.020: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:13.682: INFO: Number of nodes with available pods: 0 Feb 17 12:02:13.682: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:14.037: INFO: Number of nodes with available pods: 0 Feb 17 12:02:14.037: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:15.040: INFO: Number of nodes with available pods: 0 Feb 17 12:02:15.040: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:16.031: INFO: Number of nodes with available pods: 0 Feb 17 12:02:16.031: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:19.751: INFO: Number of nodes with available pods: 0 Feb 17 12:02:19.752: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:20.544: INFO: Number of nodes with available pods: 0 Feb 17 12:02:20.545: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:21.032: INFO: Number of nodes with available pods: 0 Feb 17 12:02:21.032: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:22.034: INFO: Number of nodes with available pods: 0 Feb 17 12:02:22.034: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:23.228: INFO: Number of nodes with available pods: 0 Feb 17 12:02:23.228: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:24.029: INFO: Number of nodes with available pods: 1 Feb 17 12:02:24.029: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 17 12:02:24.068: INFO: Number of nodes with available pods: 1 Feb 17 12:02:24.068: INFO: Number of running nodes: 0, number of available pods: 1 Feb 17 12:02:25.078: INFO: Number of nodes with available pods: 0 Feb 17 12:02:25.078: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 17 12:02:25.105: INFO: Number of nodes with available pods: 0 Feb 17 12:02:25.105: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:26.121: INFO: Number of nodes with available pods: 0 Feb 17 12:02:26.121: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:27.119: INFO: Number of nodes with available pods: 0 Feb 17 12:02:27.120: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:28.124: INFO: Number of nodes with available pods: 0 Feb 17 12:02:28.124: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:29.150: INFO: Number of nodes with available pods: 0 Feb 17 12:02:29.150: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:30.122: INFO: Number of nodes with available pods: 0 Feb 17 12:02:30.122: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:31.142: INFO: Number of nodes with available pods: 0 Feb 17 12:02:31.142: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:32.122: INFO: Number of nodes with available pods: 0 Feb 17 12:02:32.122: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:33.115: INFO: Number of 
nodes with available pods: 0 Feb 17 12:02:33.116: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:34.121: INFO: Number of nodes with available pods: 0 Feb 17 12:02:34.121: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:35.125: INFO: Number of nodes with available pods: 0 Feb 17 12:02:35.125: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:36.118: INFO: Number of nodes with available pods: 0 Feb 17 12:02:36.118: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:37.241: INFO: Number of nodes with available pods: 0 Feb 17 12:02:37.242: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:38.119: INFO: Number of nodes with available pods: 0 Feb 17 12:02:38.119: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:39.118: INFO: Number of nodes with available pods: 0 Feb 17 12:02:39.118: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:02:40.157: INFO: Number of nodes with available pods: 1 Feb 17 12:02:40.157: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4ptcl, will wait for the garbage collector to delete the pods Feb 17 12:02:40.348: INFO: Deleting DaemonSet.extensions daemon-set took: 75.519941ms Feb 17 12:02:40.449: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.183861ms Feb 17 12:02:52.667: INFO: Number of nodes with available pods: 0 Feb 17 12:02:52.667: INFO: Number of running nodes: 0, number of available pods: 0 Feb 17 12:02:52.673: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4ptcl/daemonsets","resourceVersion":"21975489"},"items":null} Feb 17 12:02:52.676: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4ptcl/pods","resourceVersion":"21975489"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:02:52.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-4ptcl" for this suite. 
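The second half of the test patches the same DaemonSet: the pod template's node selector is switched to green and the update strategy is set to RollingUpdate, after which availability is polled until the pod is rescheduled, and the AfterEach block deletes the DaemonSet and waits for the garbage collector to remove its pods. Sketched with the Go client types (the color label key is again an assumption):

package e2esketch

import (
	appsv1 "k8s.io/api/apps/v1"
)

// retargetDaemonSet mutates an existing DaemonSet object so that its pods move
// to nodes labelled color=green and future template changes roll out gradually.
// The caller is expected to send the modified object back with an Update call.
func retargetDaemonSet(ds *appsv1.DaemonSet) {
	ds.Spec.Template.Spec.NodeSelector = map[string]string{"color": "green"}
	ds.Spec.UpdateStrategy = appsv1.DaemonSetUpdateStrategy{
		Type: appsv1.RollingUpdateDaemonSetStrategyType,
	}
}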
Feb 17 12:02:58.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:02:59.031: INFO: namespace: e2e-tests-daemonsets-4ptcl, resource: bindings, ignored listing per whitelist Feb 17 12:02:59.132: INFO: namespace e2e-tests-daemonsets-4ptcl deletion completed in 6.300236102s • [SLOW TEST:47.642 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:02:59.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Feb 17 12:02:59.338: INFO: Waiting up to 5m0s for pod "client-containers-70c890a4-517d-11ea-a180-0242ac110008" in namespace "e2e-tests-containers-tjhzs" to be "success or failure" Feb 17 12:02:59.468: INFO: Pod "client-containers-70c890a4-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 129.780566ms Feb 17 12:03:01.605: INFO: Pod "client-containers-70c890a4-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.266215603s Feb 17 12:03:03.621: INFO: Pod "client-containers-70c890a4-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282461263s Feb 17 12:03:05.696: INFO: Pod "client-containers-70c890a4-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.357522669s Feb 17 12:03:07.961: INFO: Pod "client-containers-70c890a4-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622149583s Feb 17 12:03:09.972: INFO: Pod "client-containers-70c890a4-517d-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.633596229s STEP: Saw pod success Feb 17 12:03:09.972: INFO: Pod "client-containers-70c890a4-517d-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:03:09.978: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-70c890a4-517d-11ea-a180-0242ac110008 container test-container: STEP: delete the pod Feb 17 12:03:10.397: INFO: Waiting for pod client-containers-70c890a4-517d-11ea-a180-0242ac110008 to disappear Feb 17 12:03:10.406: INFO: Pod client-containers-70c890a4-517d-11ea-a180-0242ac110008 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:03:10.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-tjhzs" for this suite. 
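The Docker Containers case above relies on a simple rule: when a container spec leaves both command and args empty, the kubelet runs the image's own ENTRYPOINT and CMD unchanged. A minimal sketch of such a pod; the image below is a stand-in, not the image the conformance test actually runs:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newImageDefaultsPod sketches a pod whose container sets neither Command nor
// Args, so the image's built-in ENTRYPOINT/CMD decide what actually runs.
func newImageDefaultsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/nginx:1.14-alpine", // stand-in image
				// Command and Args intentionally omitted.
			}},
		},
	}
}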
Feb 17 12:03:16.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:03:16.559: INFO: namespace: e2e-tests-containers-tjhzs, resource: bindings, ignored listing per whitelist Feb 17 12:03:16.644: INFO: namespace e2e-tests-containers-tjhzs deletion completed in 6.229482722s • [SLOW TEST:17.512 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:03:16.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-7b4da5ca-517d-11ea-a180-0242ac110008 STEP: Creating a pod to test consume secrets Feb 17 12:03:17.000: INFO: Waiting up to 5m0s for pod "pod-secrets-7b4ede1d-517d-11ea-a180-0242ac110008" in namespace "e2e-tests-secrets-zjpxp" to be "success or failure" Feb 17 12:03:17.046: INFO: Pod "pod-secrets-7b4ede1d-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 44.990796ms Feb 17 12:03:20.451: INFO: Pod "pod-secrets-7b4ede1d-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.450648953s Feb 17 12:03:22.472: INFO: Pod "pod-secrets-7b4ede1d-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.471484452s Feb 17 12:03:24.532: INFO: Pod "pod-secrets-7b4ede1d-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.531492132s Feb 17 12:03:26.572: INFO: Pod "pod-secrets-7b4ede1d-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.571517748s Feb 17 12:03:28.613: INFO: Pod "pod-secrets-7b4ede1d-517d-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.612811088s STEP: Saw pod success Feb 17 12:03:28.614: INFO: Pod "pod-secrets-7b4ede1d-517d-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:03:28.634: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7b4ede1d-517d-11ea-a180-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 17 12:03:28.771: INFO: Waiting for pod pod-secrets-7b4ede1d-517d-11ea-a180-0242ac110008 to disappear Feb 17 12:03:28.972: INFO: Pod pod-secrets-7b4ede1d-517d-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:03:28.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-zjpxp" for this suite. Feb 17 12:03:35.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:03:35.164: INFO: namespace: e2e-tests-secrets-zjpxp, resource: bindings, ignored listing per whitelist Feb 17 12:03:35.386: INFO: namespace e2e-tests-secrets-zjpxp deletion completed in 6.396480867s • [SLOW TEST:18.741 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:03:35.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 12:03:35.719: INFO: Creating deployment "test-recreate-deployment" Feb 17 12:03:35.741: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 17 12:03:35.778: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Feb 17 12:03:37.889: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 17 12:03:38.307: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, 
loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 17 12:03:40.339: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 17 12:03:42.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 17 12:03:44.323: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537815, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 17 12:03:46.323: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 17 12:03:46.342: INFO: Updating deployment test-recreate-deployment Feb 17 12:03:46.342: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Feb 17 12:03:47.031: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-52kxx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-52kxx/deployments/test-recreate-deployment,UID:867bccfd-517d-11ea-a994-fa163e34d433,ResourceVersion:21975662,Generation:2,CreationTimestamp:2020-02-17 12:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-17 12:03:46 +0000 UTC 2020-02-17 12:03:46 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-17 12:03:46 +0000 UTC 2020-02-17 12:03:35 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Feb 17 12:03:47.047: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-52kxx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-52kxx/replicasets/test-recreate-deployment-589c4bfd,UID:8d00641f-517d-11ea-a994-fa163e34d433,ResourceVersion:21975660,Generation:1,CreationTimestamp:2020-02-17 12:03:46 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 867bccfd-517d-11ea-a994-fa163e34d433 0xc00212263f 0xc002122650}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 17 12:03:47.047: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 17 12:03:47.047: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-52kxx,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-52kxx/replicasets/test-recreate-deployment-5bf7f65dc,UID:8684cf68-517d-11ea-a994-fa163e34d433,ResourceVersion:21975652,Generation:2,CreationTimestamp:2020-02-17 12:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 867bccfd-517d-11ea-a994-fa163e34d433 0xc002122710 0xc002122711}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 17 12:03:47.109: INFO: Pod "test-recreate-deployment-589c4bfd-v2tjt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-v2tjt,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-52kxx,SelfLink:/api/v1/namespaces/e2e-tests-deployment-52kxx/pods/test-recreate-deployment-589c4bfd-v2tjt,UID:8d126d7a-517d-11ea-a994-fa163e34d433,ResourceVersion:21975657,Generation:0,CreationTimestamp:2020-02-17 12:03:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 8d00641f-517d-11ea-a994-fa163e34d433 0xc00203a44f 0xc00203a460}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-sjc7b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sjc7b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-sjc7b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00203a4c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00203a4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 12:03:46 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:03:47.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-52kxx" for this suite. Feb 17 12:03:55.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:03:55.921: INFO: namespace: e2e-tests-deployment-52kxx, resource: bindings, ignored listing per whitelist Feb 17 12:03:55.990: INFO: namespace e2e-tests-deployment-52kxx deletion completed in 8.849378721s • [SLOW TEST:20.604 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:03:55.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 12:03:56.442: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:03:57.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-w68vm" for this suite. 
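The deployment dumped above uses Strategy.Type=Recreate, which is the point of the RecreateDeployment test: on a template change all old replicas are scaled to zero before the new ReplicaSet is scaled up, so old and new pods never run side by side. The log shows exactly that handover, from test-recreate-deployment-5bf7f65dc (redis) to test-recreate-deployment-589c4bfd (nginx). A trimmed-down sketch of such a deployment, using the names and first-revision image from the dump:

package e2esketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newRecreateDeployment sketches a single-replica Deployment with the Recreate
// strategy, matching the shape of the object dumped in the log above.
func newRecreateDeployment() *appsv1.Deployment {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
}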
Feb 17 12:04:03.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:04:03.902: INFO: namespace: e2e-tests-custom-resource-definition-w68vm, resource: bindings, ignored listing per whitelist Feb 17 12:04:03.938: INFO: namespace e2e-tests-custom-resource-definition-w68vm deletion completed in 6.33691495s • [SLOW TEST:7.947 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:04:03.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Feb 17 12:04:04.186: INFO: PodSpec: initContainers in spec.initContainers Feb 17 12:05:14.277: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9772b0f0-517d-11ea-a180-0242ac110008", GenerateName:"", Namespace:"e2e-tests-init-container-fc579", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-fc579/pods/pod-init-9772b0f0-517d-11ea-a180-0242ac110008", UID:"97752806-517d-11ea-a994-fa163e34d433", ResourceVersion:"21975840", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717537844, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"185994616"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-j56f5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0026ce000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), 
CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j56f5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j56f5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-j56f5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), 
LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0024060e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00289c1e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002406300)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002406460)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002406468), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00240646c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537844, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537844, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537844, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717537844, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.5", StartTime:(*v1.Time)(0xc002670040), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002606070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0026060e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", 
ContainerID:"docker://eae5b4805fbcf4b3f455305805b4788afb1110c29589e8030ff704e0fc2e0666"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002670080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002670060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:05:14.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-fc579" for this suite. Feb 17 12:05:32.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:05:32.775: INFO: namespace: e2e-tests-init-container-fc579, resource: bindings, ignored listing per whitelist Feb 17 12:05:32.835: INFO: namespace e2e-tests-init-container-fc579 deletion completed in 18.397899437s • [SLOW TEST:88.897 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:05:32.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-cc6bfc29-517d-11ea-a180-0242ac110008 STEP: Creating a pod to test consume secrets Feb 17 12:05:33.084: INFO: Waiting up to 5m0s for pod "pod-secrets-cc6d2f2b-517d-11ea-a180-0242ac110008" in namespace "e2e-tests-secrets-jdvn6" to be "success or failure" Feb 17 12:05:33.098: INFO: Pod "pod-secrets-cc6d2f2b-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.595345ms Feb 17 12:05:35.322: INFO: Pod "pod-secrets-cc6d2f2b-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.237861905s Feb 17 12:05:37.352: INFO: Pod "pod-secrets-cc6d2f2b-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267934425s Feb 17 12:05:39.457: INFO: Pod "pod-secrets-cc6d2f2b-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.373106312s Feb 17 12:05:41.471: INFO: Pod "pod-secrets-cc6d2f2b-517d-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.386969164s Feb 17 12:05:43.484: INFO: Pod "pod-secrets-cc6d2f2b-517d-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.399722737s STEP: Saw pod success Feb 17 12:05:43.484: INFO: Pod "pod-secrets-cc6d2f2b-517d-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:05:43.489: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-cc6d2f2b-517d-11ea-a180-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 17 12:05:44.215: INFO: Waiting for pod pod-secrets-cc6d2f2b-517d-11ea-a180-0242ac110008 to disappear Feb 17 12:05:44.237: INFO: Pod pod-secrets-cc6d2f2b-517d-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:05:44.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-jdvn6" for this suite. Feb 17 12:05:50.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:05:50.618: INFO: namespace: e2e-tests-secrets-jdvn6, resource: bindings, ignored listing per whitelist Feb 17 12:05:50.643: INFO: namespace e2e-tests-secrets-jdvn6 deletion completed in 6.392994951s • [SLOW TEST:17.807 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:05:50.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Feb 17 12:05:50.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:05:51.320: INFO: stderr: "" Feb 17 12:05:51.320: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
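The replication controller just created via kubectl (update-demo-nautilus, two replicas of the nautilus image, selected by name=update-demo) corresponds roughly to the following object; the container port and other template details are assumptions beyond what the log shows.

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newUpdateDemoRC sketches the update-demo-nautilus ReplicationController that
// the kubectl test drives: two replicas of the nautilus image, carrying the
// name=update-demo label that the subsequent kubectl queries filter on.
func newUpdateDemoRC() *corev1.ReplicationController {
	replicas := int32(2)
	labels := map[string]string{"name": "update-demo"}
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "update-demo-nautilus"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "update-demo",
						Image: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0",
						Ports: []corev1.ContainerPort{{ContainerPort: 80}}, // assumed port
					}},
				},
			},
		},
	}
}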
Feb 17 12:05:51.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:05:51.435: INFO: stderr: "" Feb 17 12:05:51.436: INFO: stdout: "update-demo-nautilus-9t8tv " STEP: Replicas for name=update-demo: expected=2 actual=1 Feb 17 12:05:56.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:05:56.639: INFO: stderr: "" Feb 17 12:05:56.639: INFO: stdout: "update-demo-nautilus-9t8tv update-demo-nautilus-wvhz4 " Feb 17 12:05:56.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9t8tv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:05:56.904: INFO: stderr: "" Feb 17 12:05:56.905: INFO: stdout: "" Feb 17 12:05:56.905: INFO: update-demo-nautilus-9t8tv is created but not running Feb 17 12:06:01.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:02.019: INFO: stderr: "" Feb 17 12:06:02.019: INFO: stdout: "update-demo-nautilus-9t8tv update-demo-nautilus-wvhz4 " Feb 17 12:06:02.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9t8tv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:02.208: INFO: stderr: "" Feb 17 12:06:02.208: INFO: stdout: "" Feb 17 12:06:02.208: INFO: update-demo-nautilus-9t8tv is created but not running Feb 17 12:06:07.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:07.424: INFO: stderr: "" Feb 17 12:06:07.424: INFO: stdout: "update-demo-nautilus-9t8tv update-demo-nautilus-wvhz4 " Feb 17 12:06:07.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9t8tv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:07.583: INFO: stderr: "" Feb 17 12:06:07.583: INFO: stdout: "true" Feb 17 12:06:07.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9t8tv -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:07.680: INFO: stderr: "" Feb 17 12:06:07.680: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 17 12:06:07.680: INFO: validating pod update-demo-nautilus-9t8tv Feb 17 12:06:07.759: INFO: got data: { "image": "nautilus.jpg" } Feb 17 12:06:07.759: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 17 12:06:07.759: INFO: update-demo-nautilus-9t8tv is verified up and running Feb 17 12:06:07.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvhz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:07.923: INFO: stderr: "" Feb 17 12:06:07.923: INFO: stdout: "true" Feb 17 12:06:07.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvhz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:08.115: INFO: stderr: "" Feb 17 12:06:08.115: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 17 12:06:08.115: INFO: validating pod update-demo-nautilus-wvhz4 Feb 17 12:06:08.134: INFO: got data: { "image": "nautilus.jpg" } Feb 17 12:06:08.134: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 17 12:06:08.134: INFO: update-demo-nautilus-wvhz4 is verified up and running STEP: scaling down the replication controller Feb 17 12:06:08.137: INFO: scanned /root for discovery docs: Feb 17 12:06:08.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:09.380: INFO: stderr: "" Feb 17 12:06:09.380: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 17 12:06:09.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:09.556: INFO: stderr: "" Feb 17 12:06:09.557: INFO: stdout: "update-demo-nautilus-9t8tv update-demo-nautilus-wvhz4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 17 12:06:14.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:14.704: INFO: stderr: "" Feb 17 12:06:14.704: INFO: stdout: "update-demo-nautilus-9t8tv update-demo-nautilus-wvhz4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 17 12:06:19.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:19.981: INFO: stderr: "" Feb 17 12:06:19.981: INFO: stdout: "update-demo-nautilus-9t8tv update-demo-nautilus-wvhz4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 17 12:06:24.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:25.169: INFO: stderr: "" Feb 17 12:06:25.170: INFO: stdout: "update-demo-nautilus-wvhz4 " Feb 17 12:06:25.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvhz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:25.317: INFO: stderr: "" Feb 17 12:06:25.317: INFO: stdout: "true" Feb 17 12:06:25.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvhz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:25.441: INFO: stderr: "" Feb 17 12:06:25.442: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 17 12:06:25.442: INFO: validating pod update-demo-nautilus-wvhz4 Feb 17 12:06:25.481: INFO: got data: { "image": "nautilus.jpg" } Feb 17 12:06:25.481: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 17 12:06:25.481: INFO: update-demo-nautilus-wvhz4 is verified up and running STEP: scaling up the replication controller Feb 17 12:06:25.486: INFO: scanned /root for discovery docs: Feb 17 12:06:25.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:26.710: INFO: stderr: "" Feb 17 12:06:26.710: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 17 12:06:26.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:26.831: INFO: stderr: "" Feb 17 12:06:26.831: INFO: stdout: "update-demo-nautilus-w92pf update-demo-nautilus-wvhz4 " Feb 17 12:06:26.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w92pf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:26.967: INFO: stderr: "" Feb 17 12:06:26.967: INFO: stdout: "" Feb 17 12:06:26.967: INFO: update-demo-nautilus-w92pf is created but not running Feb 17 12:06:31.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:32.544: INFO: stderr: "" Feb 17 12:06:32.544: INFO: stdout: "update-demo-nautilus-w92pf update-demo-nautilus-wvhz4 " Feb 17 12:06:32.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w92pf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:35.018: INFO: stderr: "" Feb 17 12:06:35.018: INFO: stdout: "" Feb 17 12:06:35.018: INFO: update-demo-nautilus-w92pf is created but not running Feb 17 12:06:40.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:40.329: INFO: stderr: "" Feb 17 12:06:40.329: INFO: stdout: "update-demo-nautilus-w92pf update-demo-nautilus-wvhz4 " Feb 17 12:06:40.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w92pf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:40.471: INFO: stderr: "" Feb 17 12:06:40.472: INFO: stdout: "true" Feb 17 12:06:40.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w92pf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:40.584: INFO: stderr: "" Feb 17 12:06:40.585: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 17 12:06:40.585: INFO: validating pod update-demo-nautilus-w92pf Feb 17 12:06:40.599: INFO: got data: { "image": "nautilus.jpg" } Feb 17 12:06:40.599: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 17 12:06:40.599: INFO: update-demo-nautilus-w92pf is verified up and running Feb 17 12:06:40.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvhz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:40.706: INFO: stderr: "" Feb 17 12:06:40.706: INFO: stdout: "true" Feb 17 12:06:40.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wvhz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:40.874: INFO: stderr: "" Feb 17 12:06:40.874: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 17 12:06:40.874: INFO: validating pod update-demo-nautilus-wvhz4 Feb 17 12:06:40.885: INFO: got data: { "image": "nautilus.jpg" } Feb 17 12:06:40.885: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 17 12:06:40.885: INFO: update-demo-nautilus-wvhz4 is verified up and running STEP: using delete to clean up resources Feb 17 12:06:40.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:41.026: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 17 12:06:41.027: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 17 12:06:41.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-trw2s' Feb 17 12:06:41.222: INFO: stderr: "No resources found.\n" Feb 17 12:06:41.223: INFO: stdout: "" Feb 17 12:06:41.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-trw2s -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 17 12:06:41.478: INFO: stderr: "" Feb 17 12:06:41.479: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:06:41.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-trw2s" for this suite. 
Feb 17 12:07:05.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:07:05.645: INFO: namespace: e2e-tests-kubectl-trw2s, resource: bindings, ignored listing per whitelist Feb 17 12:07:05.731: INFO: namespace e2e-tests-kubectl-trw2s deletion completed in 24.229537083s • [SLOW TEST:75.088 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:07:05.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:07:06.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-htds5" for this suite. 
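The QOS-class check above can be reproduced with a one-line query; the pod name and namespace below are placeholders, not values from this run. A pod whose resource requests equal its limits is expected to report Guaranteed.
# Print the QOS class the API server assigned to the pod.
kubectl get pod <pod-name> --namespace=<namespace> -o jsonpath='{.status.qosClass}'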
Feb 17 12:07:28.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:07:28.350: INFO: namespace: e2e-tests-pods-htds5, resource: bindings, ignored listing per whitelist Feb 17 12:07:28.363: INFO: namespace e2e-tests-pods-htds5 deletion completed in 22.252964111s • [SLOW TEST:22.631 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:07:28.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:07:28.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-8tzpm" for this suite. 
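The secure-master-service case essentially checks that the built-in kubernetes Service in the default namespace is present and serves HTTPS on port 443; a quick manual check along those lines, with no suite-specific names involved:
kubectl get service kubernetes --namespace=default
# Or just the advertised port:
kubectl get service kubernetes --namespace=default -o jsonpath='{.spec.ports[0].port}'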
Feb 17 12:07:34.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:07:34.709: INFO: namespace: e2e-tests-services-8tzpm, resource: bindings, ignored listing per whitelist Feb 17 12:07:34.778: INFO: namespace e2e-tests-services-8tzpm deletion completed in 6.125726136s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.415 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:07:34.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Feb 17 12:07:34.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zst75' Feb 17 12:07:35.375: INFO: stderr: "" Feb 17 12:07:35.375: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 17 12:07:35.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zst75' Feb 17 12:07:35.532: INFO: stderr: "" Feb 17 12:07:35.533: INFO: stdout: "update-demo-nautilus-clzjw update-demo-nautilus-xrwqg " Feb 17 12:07:35.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clzjw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zst75' Feb 17 12:07:35.754: INFO: stderr: "" Feb 17 12:07:35.755: INFO: stdout: "" Feb 17 12:07:35.755: INFO: update-demo-nautilus-clzjw is created but not running Feb 17 12:07:40.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zst75' Feb 17 12:07:40.869: INFO: stderr: "" Feb 17 12:07:40.870: INFO: stdout: "update-demo-nautilus-clzjw update-demo-nautilus-xrwqg " Feb 17 12:07:40.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clzjw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zst75' Feb 17 12:07:40.960: INFO: stderr: "" Feb 17 12:07:40.960: INFO: stdout: "" Feb 17 12:07:40.960: INFO: update-demo-nautilus-clzjw is created but not running Feb 17 12:07:45.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zst75' Feb 17 12:07:46.155: INFO: stderr: "" Feb 17 12:07:46.156: INFO: stdout: "update-demo-nautilus-clzjw update-demo-nautilus-xrwqg " Feb 17 12:07:46.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clzjw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zst75' Feb 17 12:07:46.293: INFO: stderr: "" Feb 17 12:07:46.293: INFO: stdout: "" Feb 17 12:07:46.293: INFO: update-demo-nautilus-clzjw is created but not running Feb 17 12:07:51.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zst75' Feb 17 12:07:51.478: INFO: stderr: "" Feb 17 12:07:51.478: INFO: stdout: "update-demo-nautilus-clzjw update-demo-nautilus-xrwqg " Feb 17 12:07:51.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clzjw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zst75' Feb 17 12:07:51.620: INFO: stderr: "" Feb 17 12:07:51.620: INFO: stdout: "true" Feb 17 12:07:51.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-clzjw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zst75' Feb 17 12:07:51.709: INFO: stderr: "" Feb 17 12:07:51.709: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 17 12:07:51.709: INFO: validating pod update-demo-nautilus-clzjw Feb 17 12:07:51.721: INFO: got data: { "image": "nautilus.jpg" } Feb 17 12:07:51.721: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 17 12:07:51.722: INFO: update-demo-nautilus-clzjw is verified up and running Feb 17 12:07:51.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xrwqg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zst75' Feb 17 12:07:51.837: INFO: stderr: "" Feb 17 12:07:51.837: INFO: stdout: "true" Feb 17 12:07:51.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xrwqg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zst75' Feb 17 12:07:51.958: INFO: stderr: "" Feb 17 12:07:51.958: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 17 12:07:51.958: INFO: validating pod update-demo-nautilus-xrwqg Feb 17 12:07:51.972: INFO: got data: { "image": "nautilus.jpg" } Feb 17 12:07:51.972: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 17 12:07:51.972: INFO: update-demo-nautilus-xrwqg is verified up and running STEP: rolling-update to new replication controller Feb 17 12:07:51.974: INFO: scanned /root for discovery docs: Feb 17 12:07:51.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-zst75' Feb 17 12:08:32.270: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 17 12:08:32.271: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 17 12:08:32.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zst75' Feb 17 12:08:32.438: INFO: stderr: "" Feb 17 12:08:32.438: INFO: stdout: "update-demo-kitten-dsx8r update-demo-kitten-sj6gf " Feb 17 12:08:32.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dsx8r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zst75' Feb 17 12:08:32.628: INFO: stderr: "" Feb 17 12:08:32.628: INFO: stdout: "true" Feb 17 12:08:32.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-dsx8r -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zst75' Feb 17 12:08:32.840: INFO: stderr: "" Feb 17 12:08:32.840: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 17 12:08:32.840: INFO: validating pod update-demo-kitten-dsx8r Feb 17 12:08:32.874: INFO: got data: { "image": "kitten.jpg" } Feb 17 12:08:32.874: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 17 12:08:32.874: INFO: update-demo-kitten-dsx8r is verified up and running Feb 17 12:08:32.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sj6gf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zst75' Feb 17 12:08:32.957: INFO: stderr: "" Feb 17 12:08:32.958: INFO: stdout: "true" Feb 17 12:08:32.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sj6gf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zst75' Feb 17 12:08:33.052: INFO: stderr: "" Feb 17 12:08:33.052: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 17 12:08:33.052: INFO: validating pod update-demo-kitten-sj6gf Feb 17 12:08:33.059: INFO: got data: { "image": "kitten.jpg" } Feb 17 12:08:33.059: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 17 12:08:33.059: INFO: update-demo-kitten-sj6gf is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:08:33.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zst75" for this suite. 
Feb 17 12:08:59.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:08:59.168: INFO: namespace: e2e-tests-kubectl-zst75, resource: bindings, ignored listing per whitelist Feb 17 12:08:59.225: INFO: namespace e2e-tests-kubectl-zst75 deletion completed in 26.160902823s • [SLOW TEST:84.446 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:08:59.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-82vv5 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-82vv5 STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-82vv5 STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-82vv5 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-82vv5 Feb 17 12:09:13.079: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-82vv5, name: ss-0, uid: 4f74eb98-517e-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete. Feb 17 12:09:13.116: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-82vv5, name: ss-0, uid: 4f74eb98-517e-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Feb 17 12:09:13.284: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-82vv5, name: ss-0, uid: 4f74eb98-517e-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. 
Feb 17 12:09:13.336: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-82vv5 STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-82vv5 STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-82vv5 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 17 12:09:28.290: INFO: Deleting all statefulset in ns e2e-tests-statefulset-82vv5 Feb 17 12:09:28.295: INFO: Scaling statefulset ss to 0 Feb 17 12:09:48.395: INFO: Waiting for statefulset status.replicas updated to 0 Feb 17 12:09:48.406: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:09:48.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-82vv5" for this suite. Feb 17 12:09:56.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:09:56.861: INFO: namespace: e2e-tests-statefulset-82vv5, resource: bindings, ignored listing per whitelist Feb 17 12:09:56.995: INFO: namespace e2e-tests-statefulset-82vv5 deletion completed in 8.314361761s • [SLOW TEST:57.770 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:09:56.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-6a007437-517e-11ea-a180-0242ac110008 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:10:11.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-ktttv" for this suite. 
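The binary-data case above can be approximated by hand: a key created from a non-UTF-8 file should end up under the ConfigMap's binaryData field rather than data. The ConfigMap name, file name, and namespace below are placeholders.
# One text key plus one binary key.
kubectl create configmap demo-binary --from-literal=text=hello --from-file=blob=./image.bin --namespace=<namespace>
# Inspect which keys landed in data and which in binaryData.
kubectl get configmap demo-binary --namespace=<namespace> -o yaml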
Feb 17 12:10:31.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:10:31.874: INFO: namespace: e2e-tests-configmap-ktttv, resource: bindings, ignored listing per whitelist Feb 17 12:10:31.951: INFO: namespace e2e-tests-configmap-ktttv deletion completed in 20.259265546s • [SLOW TEST:34.955 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:10:31.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 12:10:32.241: INFO: Creating ReplicaSet my-hostname-basic-7ebf54dd-517e-11ea-a180-0242ac110008 Feb 17 12:10:32.526: INFO: Pod name my-hostname-basic-7ebf54dd-517e-11ea-a180-0242ac110008: Found 0 pods out of 1 Feb 17 12:10:38.614: INFO: Pod name my-hostname-basic-7ebf54dd-517e-11ea-a180-0242ac110008: Found 1 pods out of 1 Feb 17 12:10:38.614: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7ebf54dd-517e-11ea-a180-0242ac110008" is running Feb 17 12:10:42.650: INFO: Pod "my-hostname-basic-7ebf54dd-517e-11ea-a180-0242ac110008-bj9cb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 12:10:32 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 12:10:32 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7ebf54dd-517e-11ea-a180-0242ac110008]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 12:10:32 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-7ebf54dd-517e-11ea-a180-0242ac110008]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 12:10:32 +0000 UTC Reason: Message:}]) Feb 17 12:10:42.651: INFO: Trying to dial the pod Feb 17 12:10:47.691: INFO: Controller my-hostname-basic-7ebf54dd-517e-11ea-a180-0242ac110008: Got expected result from replica 1 [my-hostname-basic-7ebf54dd-517e-11ea-a180-0242ac110008-bj9cb]: "my-hostname-basic-7ebf54dd-517e-11ea-a180-0242ac110008-bj9cb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:10:47.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-hmpkq" for this suite. 
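A minimal stand-in for the ReplicaSet exercised above, created the same way the suite does it (a manifest piped to kubectl create -f -). The name, label, and image are placeholders; the test generates its own names and uses its own serving image.
kubectl create --namespace=<namespace> -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: nginx   # placeholder image, not the suite's test image
EOF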
Feb 17 12:10:56.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:10:57.041: INFO: namespace: e2e-tests-replicaset-hmpkq, resource: bindings, ignored listing per whitelist Feb 17 12:10:57.182: INFO: namespace e2e-tests-replicaset-hmpkq deletion completed in 9.48220189s • [SLOW TEST:25.231 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:10:57.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Feb 17 12:10:57.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-v4fjx' Feb 17 12:10:57.923: INFO: stderr: "" Feb 17 12:10:57.924: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 17 12:10:57.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-v4fjx' Feb 17 12:10:58.162: INFO: stderr: "" Feb 17 12:10:58.162: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Feb 17 12:11:03.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-v4fjx' Feb 17 12:11:03.275: INFO: stderr: "" Feb 17 12:11:03.276: INFO: stdout: "update-demo-nautilus-q46sm update-demo-nautilus-zjqz8 " Feb 17 12:11:03.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q46sm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4fjx' Feb 17 12:11:03.373: INFO: stderr: "" Feb 17 12:11:03.373: INFO: stdout: "" Feb 17 12:11:03.373: INFO: update-demo-nautilus-q46sm is created but not running Feb 17 12:11:08.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-v4fjx' Feb 17 12:11:08.907: INFO: stderr: "" Feb 17 12:11:08.907: INFO: stdout: "update-demo-nautilus-q46sm update-demo-nautilus-zjqz8 " Feb 17 12:11:08.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q46sm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4fjx' Feb 17 12:11:09.108: INFO: stderr: "" Feb 17 12:11:09.108: INFO: stdout: "" Feb 17 12:11:09.108: INFO: update-demo-nautilus-q46sm is created but not running Feb 17 12:11:14.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-v4fjx' Feb 17 12:11:14.237: INFO: stderr: "" Feb 17 12:11:14.237: INFO: stdout: "update-demo-nautilus-q46sm update-demo-nautilus-zjqz8 " Feb 17 12:11:14.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q46sm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4fjx' Feb 17 12:11:14.354: INFO: stderr: "" Feb 17 12:11:14.355: INFO: stdout: "true" Feb 17 12:11:14.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q46sm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4fjx' Feb 17 12:11:14.438: INFO: stderr: "" Feb 17 12:11:14.438: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 17 12:11:14.438: INFO: validating pod update-demo-nautilus-q46sm Feb 17 12:11:14.517: INFO: got data: { "image": "nautilus.jpg" } Feb 17 12:11:14.517: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 17 12:11:14.517: INFO: update-demo-nautilus-q46sm is verified up and running Feb 17 12:11:14.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zjqz8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4fjx' Feb 17 12:11:14.671: INFO: stderr: "" Feb 17 12:11:14.672: INFO: stdout: "true" Feb 17 12:11:14.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zjqz8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-v4fjx' Feb 17 12:11:14.800: INFO: stderr: "" Feb 17 12:11:14.800: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 17 12:11:14.800: INFO: validating pod update-demo-nautilus-zjqz8 Feb 17 12:11:14.828: INFO: got data: { "image": "nautilus.jpg" } Feb 17 12:11:14.828: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 17 12:11:14.829: INFO: update-demo-nautilus-zjqz8 is verified up and running STEP: using delete to clean up resources Feb 17 12:11:14.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-v4fjx' Feb 17 12:11:14.979: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 17 12:11:14.979: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 17 12:11:14.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-v4fjx' Feb 17 12:11:15.132: INFO: stderr: "No resources found.\n" Feb 17 12:11:15.133: INFO: stdout: "" Feb 17 12:11:15.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-v4fjx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 17 12:11:15.285: INFO: stderr: "" Feb 17 12:11:15.285: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:11:15.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-v4fjx" for this suite. 
Feb 17 12:11:39.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:11:39.475: INFO: namespace: e2e-tests-kubectl-v4fjx, resource: bindings, ignored listing per whitelist Feb 17 12:11:39.499: INFO: namespace e2e-tests-kubectl-v4fjx deletion completed in 24.197977725s • [SLOW TEST:42.316 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:11:39.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-a6f393bb-517e-11ea-a180-0242ac110008 STEP: Creating a pod to test consume secrets Feb 17 12:11:39.711: INFO: Waiting up to 5m0s for pod "pod-secrets-a6f4e2bb-517e-11ea-a180-0242ac110008" in namespace "e2e-tests-secrets-q2xgs" to be "success or failure" Feb 17 12:11:39.719: INFO: Pod "pod-secrets-a6f4e2bb-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.68247ms Feb 17 12:11:41.909: INFO: Pod "pod-secrets-a6f4e2bb-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197553123s Feb 17 12:11:43.927: INFO: Pod "pod-secrets-a6f4e2bb-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215325531s Feb 17 12:11:45.946: INFO: Pod "pod-secrets-a6f4e2bb-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.234287596s Feb 17 12:11:47.968: INFO: Pod "pod-secrets-a6f4e2bb-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257222602s Feb 17 12:11:49.989: INFO: Pod "pod-secrets-a6f4e2bb-517e-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.27790402s STEP: Saw pod success Feb 17 12:11:49.989: INFO: Pod "pod-secrets-a6f4e2bb-517e-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:11:49.995: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a6f4e2bb-517e-11ea-a180-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 17 12:11:50.065: INFO: Waiting for pod pod-secrets-a6f4e2bb-517e-11ea-a180-0242ac110008 to disappear Feb 17 12:11:50.143: INFO: Pod pod-secrets-a6f4e2bb-517e-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:11:50.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-q2xgs" for this suite. Feb 17 12:11:58.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:11:58.543: INFO: namespace: e2e-tests-secrets-q2xgs, resource: bindings, ignored listing per whitelist Feb 17 12:11:58.627: INFO: namespace e2e-tests-secrets-q2xgs deletion completed in 8.441460263s • [SLOW TEST:19.127 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:11:58.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 17 12:12:25.079: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 17 12:12:25.236: INFO: Pod pod-with-prestop-http-hook still exists Feb 17 12:12:27.237: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 17 12:12:27.263: INFO: Pod pod-with-prestop-http-hook still exists Feb 17 12:12:29.237: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 17 12:12:29.350: INFO: Pod pod-with-prestop-http-hook still exists Feb 17 12:12:31.237: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 17 12:12:31.253: INFO: Pod pod-with-prestop-http-hook still exists Feb 17 12:12:33.237: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 17 12:12:33.291: INFO: Pod pod-with-prestop-http-hook still exists Feb 17 12:12:35.237: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 17 12:12:35.261: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:12:35.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-tmmkg" for this suite. Feb 17 12:12:59.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:12:59.409: INFO: namespace: e2e-tests-container-lifecycle-hook-tmmkg, resource: bindings, ignored listing per whitelist Feb 17 12:12:59.675: INFO: namespace e2e-tests-container-lifecycle-hook-tmmkg deletion completed in 24.378548495s • [SLOW TEST:61.047 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:12:59.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 17 12:12:59.863: INFO: Waiting up to 5m0s for pod "pod-d6b977b7-517e-11ea-a180-0242ac110008" in namespace "e2e-tests-emptydir-8p6m2" to be "success or failure" Feb 17 12:12:59.869: INFO: Pod "pod-d6b977b7-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.48981ms Feb 17 12:13:01.910: INFO: Pod "pod-d6b977b7-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047567194s Feb 17 12:13:03.927: INFO: Pod "pod-d6b977b7-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064711851s Feb 17 12:13:07.347: INFO: Pod "pod-d6b977b7-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.484412081s Feb 17 12:13:09.359: INFO: Pod "pod-d6b977b7-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.496506352s Feb 17 12:13:11.577: INFO: Pod "pod-d6b977b7-517e-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.713833695s STEP: Saw pod success Feb 17 12:13:11.577: INFO: Pod "pod-d6b977b7-517e-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:13:11.602: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d6b977b7-517e-11ea-a180-0242ac110008 container test-container: STEP: delete the pod Feb 17 12:13:12.615: INFO: Waiting for pod pod-d6b977b7-517e-11ea-a180-0242ac110008 to disappear Feb 17 12:13:12.656: INFO: Pod pod-d6b977b7-517e-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:13:12.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8p6m2" for this suite. Feb 17 12:13:20.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:13:21.188: INFO: namespace: e2e-tests-emptydir-8p6m2, resource: bindings, ignored listing per whitelist Feb 17 12:13:21.203: INFO: namespace e2e-tests-emptydir-8p6m2 deletion completed in 8.461792117s • [SLOW TEST:21.528 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:13:21.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Feb 17 12:13:21.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-ggrr5 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Feb 17 12:13:32.819: INFO: stderr: "kubectl run --generator=job/v1 
is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0217 12:13:31.250001 4022 log.go:172] (0xc00013a8f0) (0xc000797400) Create stream\nI0217 12:13:31.250169 4022 log.go:172] (0xc00013a8f0) (0xc000797400) Stream added, broadcasting: 1\nI0217 12:13:31.275942 4022 log.go:172] (0xc00013a8f0) Reply frame received for 1\nI0217 12:13:31.275979 4022 log.go:172] (0xc00013a8f0) (0xc0006a2c80) Create stream\nI0217 12:13:31.275986 4022 log.go:172] (0xc00013a8f0) (0xc0006a2c80) Stream added, broadcasting: 3\nI0217 12:13:31.277717 4022 log.go:172] (0xc00013a8f0) Reply frame received for 3\nI0217 12:13:31.277871 4022 log.go:172] (0xc00013a8f0) (0xc000406000) Create stream\nI0217 12:13:31.277886 4022 log.go:172] (0xc00013a8f0) (0xc000406000) Stream added, broadcasting: 5\nI0217 12:13:31.279553 4022 log.go:172] (0xc00013a8f0) Reply frame received for 5\nI0217 12:13:31.279586 4022 log.go:172] (0xc00013a8f0) (0xc0006a2d20) Create stream\nI0217 12:13:31.279603 4022 log.go:172] (0xc00013a8f0) (0xc0006a2d20) Stream added, broadcasting: 7\nI0217 12:13:31.282311 4022 log.go:172] (0xc00013a8f0) Reply frame received for 7\nI0217 12:13:31.282663 4022 log.go:172] (0xc0006a2c80) (3) Writing data frame\nI0217 12:13:31.282898 4022 log.go:172] (0xc0006a2c80) (3) Writing data frame\nI0217 12:13:31.307525 4022 log.go:172] (0xc00013a8f0) Data frame received for 5\nI0217 12:13:31.307540 4022 log.go:172] (0xc000406000) (5) Data frame handling\nI0217 12:13:31.307549 4022 log.go:172] (0xc000406000) (5) Data frame sent\nI0217 12:13:31.313345 4022 log.go:172] (0xc00013a8f0) Data frame received for 5\nI0217 12:13:31.313402 4022 log.go:172] (0xc000406000) (5) Data frame handling\nI0217 12:13:31.313422 4022 log.go:172] (0xc000406000) (5) Data frame sent\nI0217 12:13:32.766428 4022 log.go:172] (0xc00013a8f0) (0xc0006a2c80) Stream removed, broadcasting: 3\nI0217 12:13:32.766543 4022 log.go:172] (0xc00013a8f0) Data frame received for 1\nI0217 12:13:32.766578 4022 log.go:172] (0xc000797400) (1) Data frame handling\nI0217 12:13:32.766594 4022 log.go:172] (0xc000797400) (1) Data frame sent\nI0217 12:13:32.766641 4022 log.go:172] (0xc00013a8f0) (0xc000797400) Stream removed, broadcasting: 1\nI0217 12:13:32.767839 4022 log.go:172] (0xc00013a8f0) (0xc000406000) Stream removed, broadcasting: 5\nI0217 12:13:32.768270 4022 log.go:172] (0xc00013a8f0) (0xc0006a2d20) Stream removed, broadcasting: 7\nI0217 12:13:32.768412 4022 log.go:172] (0xc00013a8f0) (0xc000797400) Stream removed, broadcasting: 1\nI0217 12:13:32.768495 4022 log.go:172] (0xc00013a8f0) (0xc0006a2c80) Stream removed, broadcasting: 3\nI0217 12:13:32.768508 4022 log.go:172] (0xc00013a8f0) (0xc000406000) Stream removed, broadcasting: 5\nI0217 12:13:32.768565 4022 log.go:172] (0xc00013a8f0) (0xc0006a2d20) Stream removed, broadcasting: 7\nI0217 12:13:32.768815 4022 log.go:172] (0xc00013a8f0) Go away received\n" Feb 17 12:13:32.819: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:13:34.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ggrr5" for this suite. 
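
[Editor's note] The kubectl run --rm invocation captured above can be reproduced outside the e2e framework. Below is a minimal Go sketch that shells out to the same command shown in the log; the kubeconfig path, namespace, and image are copied from the log, kubectl is assumed to be on PATH, and the flags mirror the 1.13-era CLI (the --generator flag is deprecated there and removed in newer kubectl). The "abcd1234" stdin payload matches the stdout the test checks for.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs, per the log above.
	cmd := exec.Command("kubectl",
		"--kubeconfig=/root/.kube/config",
		"--namespace=e2e-tests-kubectl-ggrr5",
		"run", "e2e-test-rm-busybox-job",
		"--image=docker.io/library/busybox:1.29",
		"--rm=true", "--generator=job/v1", "--restart=OnFailure",
		"--attach=true", "--stdin",
		"--", "sh", "-c", "cat && echo 'stdin closed'",
	)
	cmd.Stdin = strings.NewReader("abcd1234") // what the test writes before closing stdin
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
	// Expect "abcd1234stdin closed" followed by the job deletion message, as in the log.
	fmt.Printf("%s", out)
}
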
Feb 17 12:13:43.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:13:43.530: INFO: namespace: e2e-tests-kubectl-ggrr5, resource: bindings, ignored listing per whitelist Feb 17 12:13:43.541: INFO: namespace e2e-tests-kubectl-ggrr5 deletion completed in 8.644504179s • [SLOW TEST:22.338 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:13:43.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-f0ff8c22-517e-11ea-a180-0242ac110008 STEP: Creating a pod to test consume secrets Feb 17 12:13:43.955: INFO: Waiting up to 5m0s for pod "pod-secrets-f101fdd8-517e-11ea-a180-0242ac110008" in namespace "e2e-tests-secrets-kqlws" to be "success or failure" Feb 17 12:13:43.965: INFO: Pod "pod-secrets-f101fdd8-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.90259ms Feb 17 12:13:46.428: INFO: Pod "pod-secrets-f101fdd8-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.472882387s Feb 17 12:13:48.437: INFO: Pod "pod-secrets-f101fdd8-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.481331179s Feb 17 12:13:50.450: INFO: Pod "pod-secrets-f101fdd8-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.494599866s Feb 17 12:13:52.612: INFO: Pod "pod-secrets-f101fdd8-517e-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.656593475s Feb 17 12:13:54.629: INFO: Pod "pod-secrets-f101fdd8-517e-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.673553497s STEP: Saw pod success Feb 17 12:13:54.629: INFO: Pod "pod-secrets-f101fdd8-517e-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:13:54.635: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f101fdd8-517e-11ea-a180-0242ac110008 container secret-volume-test: STEP: delete the pod Feb 17 12:13:54.801: INFO: Waiting for pod pod-secrets-f101fdd8-517e-11ea-a180-0242ac110008 to disappear Feb 17 12:13:54.806: INFO: Pod pod-secrets-f101fdd8-517e-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:13:54.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-kqlws" for this suite. Feb 17 12:14:00.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:14:00.939: INFO: namespace: e2e-tests-secrets-kqlws, resource: bindings, ignored listing per whitelist Feb 17 12:14:01.001: INFO: namespace e2e-tests-secrets-kqlws deletion completed in 6.176061935s • [SLOW TEST:17.459 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:14:01.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 17 12:14:02.498: INFO: Pod name wrapped-volume-race-fbc38bdb-517e-11ea-a180-0242ac110008: Found 0 pods out of 5 Feb 17 12:14:07.520: INFO: Pod name wrapped-volume-race-fbc38bdb-517e-11ea-a180-0242ac110008: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-fbc38bdb-517e-11ea-a180-0242ac110008 in namespace e2e-tests-emptydir-wrapper-nz6c4, will wait for the garbage collector to delete the pods Feb 17 12:15:53.746: INFO: Deleting ReplicationController wrapped-volume-race-fbc38bdb-517e-11ea-a180-0242ac110008 took: 24.853448ms Feb 17 12:15:54.146: INFO: Terminating ReplicationController wrapped-volume-race-fbc38bdb-517e-11ea-a180-0242ac110008 pods took: 400.861963ms STEP: Creating RC which spawns configmap-volume pods Feb 17 12:16:43.256: INFO: Pod name wrapped-volume-race-5bc3c39f-517f-11ea-a180-0242ac110008: Found 0 pods out of 5 Feb 17 12:16:48.281: INFO: Pod name wrapped-volume-race-5bc3c39f-517f-11ea-a180-0242ac110008: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-5bc3c39f-517f-11ea-a180-0242ac110008 in namespace e2e-tests-emptydir-wrapper-nz6c4, will wait for the garbage collector to delete the pods Feb 17 12:18:32.747: INFO: Deleting ReplicationController wrapped-volume-race-5bc3c39f-517f-11ea-a180-0242ac110008 took: 78.252182ms Feb 17 12:18:33.547: INFO: Terminating ReplicationController wrapped-volume-race-5bc3c39f-517f-11ea-a180-0242ac110008 pods took: 800.603609ms STEP: Creating RC which spawns configmap-volume pods Feb 17 12:19:23.338: INFO: Pod name wrapped-volume-race-bb3135f4-517f-11ea-a180-0242ac110008: Found 0 pods out of 5 Feb 17 12:19:28.387: INFO: Pod name wrapped-volume-race-bb3135f4-517f-11ea-a180-0242ac110008: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-bb3135f4-517f-11ea-a180-0242ac110008 in namespace e2e-tests-emptydir-wrapper-nz6c4, will wait for the garbage collector to delete the pods Feb 17 12:21:10.940: INFO: Deleting ReplicationController wrapped-volume-race-bb3135f4-517f-11ea-a180-0242ac110008 took: 114.562826ms Feb 17 12:21:11.241: INFO: Terminating ReplicationController wrapped-volume-race-bb3135f4-517f-11ea-a180-0242ac110008 pods took: 300.8616ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:21:55.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-nz6c4" for this suite. Feb 17 12:22:05.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:22:05.572: INFO: namespace: e2e-tests-emptydir-wrapper-nz6c4, resource: bindings, ignored listing per whitelist Feb 17 12:22:05.592: INFO: namespace e2e-tests-emptydir-wrapper-nz6c4 deletion completed in 10.236211505s • [SLOW TEST:484.591 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:22:05.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-jnpfr [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet 
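
[Editor's note] The wrapper-volume race test above repeatedly spins up a ReplicationController whose pods mount 50 ConfigMaps at once. A simplified client-go sketch of the same shape follows, creating the ConfigMaps and a single pod that mounts them all; the names, image, and "default" namespace are illustrative rather than the generated ones in the log, and a recent client-go (context-taking methods) is assumed.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns := "default" // the test uses a generated e2e-tests-emptydir-wrapper-* namespace

	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 50; i++ {
		name := fmt.Sprintf("wrapped-cm-%d", i)
		cm := &corev1.ConfigMap{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Data:       map[string]string{"data": fmt.Sprintf("%d", i)},
		}
		if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/cm/" + name})
	}

	// One pod mounting all 50 ConfigMap volumes; the test's RC template does the same
	// across five replicas to provoke the wrapper-volume race.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "sleeper",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sleep", "3600"},
				VolumeMounts: mounts,
			}},
			Volumes: volumes,
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created pod mounting 50 configmap volumes")
}
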
Feb 17 12:22:06.068: INFO: Found 0 stateful pods, waiting for 3 Feb 17 12:22:16.087: INFO: Found 1 stateful pods, waiting for 3 Feb 17 12:22:26.092: INFO: Found 2 stateful pods, waiting for 3 Feb 17 12:22:36.157: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 17 12:22:36.157: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 17 12:22:36.157: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 17 12:22:46.096: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 17 12:22:46.096: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 17 12:22:46.097: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Feb 17 12:22:46.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jnpfr ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 17 12:22:46.745: INFO: stderr: "I0217 12:22:46.308748 4048 log.go:172] (0xc0005b82c0) (0xc0008646e0) Create stream\nI0217 12:22:46.308897 4048 log.go:172] (0xc0005b82c0) (0xc0008646e0) Stream added, broadcasting: 1\nI0217 12:22:46.314980 4048 log.go:172] (0xc0005b82c0) Reply frame received for 1\nI0217 12:22:46.315008 4048 log.go:172] (0xc0005b82c0) (0xc0006d8000) Create stream\nI0217 12:22:46.315015 4048 log.go:172] (0xc0005b82c0) (0xc0006d8000) Stream added, broadcasting: 3\nI0217 12:22:46.315867 4048 log.go:172] (0xc0005b82c0) Reply frame received for 3\nI0217 12:22:46.315890 4048 log.go:172] (0xc0005b82c0) (0xc0003f2aa0) Create stream\nI0217 12:22:46.315897 4048 log.go:172] (0xc0005b82c0) (0xc0003f2aa0) Stream added, broadcasting: 5\nI0217 12:22:46.316816 4048 log.go:172] (0xc0005b82c0) Reply frame received for 5\nI0217 12:22:46.598783 4048 log.go:172] (0xc0005b82c0) Data frame received for 3\nI0217 12:22:46.598818 4048 log.go:172] (0xc0006d8000) (3) Data frame handling\nI0217 12:22:46.598836 4048 log.go:172] (0xc0006d8000) (3) Data frame sent\nI0217 12:22:46.739448 4048 log.go:172] (0xc0005b82c0) (0xc0006d8000) Stream removed, broadcasting: 3\nI0217 12:22:46.739546 4048 log.go:172] (0xc0005b82c0) Data frame received for 1\nI0217 12:22:46.739571 4048 log.go:172] (0xc0005b82c0) (0xc0003f2aa0) Stream removed, broadcasting: 5\nI0217 12:22:46.739592 4048 log.go:172] (0xc0008646e0) (1) Data frame handling\nI0217 12:22:46.739603 4048 log.go:172] (0xc0008646e0) (1) Data frame sent\nI0217 12:22:46.739611 4048 log.go:172] (0xc0005b82c0) (0xc0008646e0) Stream removed, broadcasting: 1\nI0217 12:22:46.739680 4048 log.go:172] (0xc0005b82c0) Go away received\nI0217 12:22:46.739705 4048 log.go:172] (0xc0005b82c0) (0xc0008646e0) Stream removed, broadcasting: 1\nI0217 12:22:46.739713 4048 log.go:172] (0xc0005b82c0) (0xc0006d8000) Stream removed, broadcasting: 3\nI0217 12:22:46.739719 4048 log.go:172] (0xc0005b82c0) (0xc0003f2aa0) Stream removed, broadcasting: 5\n" Feb 17 12:22:46.746: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 17 12:22:46.746: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Feb 17 12:22:56.814: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse 
ordinal order Feb 17 12:23:07.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jnpfr ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 12:23:08.045: INFO: stderr: "I0217 12:23:07.723517 4070 log.go:172] (0xc0006f8370) (0xc0007ae640) Create stream\nI0217 12:23:07.723649 4070 log.go:172] (0xc0006f8370) (0xc0007ae640) Stream added, broadcasting: 1\nI0217 12:23:07.731706 4070 log.go:172] (0xc0006f8370) Reply frame received for 1\nI0217 12:23:07.731804 4070 log.go:172] (0xc0006f8370) (0xc000770be0) Create stream\nI0217 12:23:07.731823 4070 log.go:172] (0xc0006f8370) (0xc000770be0) Stream added, broadcasting: 3\nI0217 12:23:07.733178 4070 log.go:172] (0xc0006f8370) Reply frame received for 3\nI0217 12:23:07.733197 4070 log.go:172] (0xc0006f8370) (0xc000770d20) Create stream\nI0217 12:23:07.733202 4070 log.go:172] (0xc0006f8370) (0xc000770d20) Stream added, broadcasting: 5\nI0217 12:23:07.734662 4070 log.go:172] (0xc0006f8370) Reply frame received for 5\nI0217 12:23:07.879593 4070 log.go:172] (0xc0006f8370) Data frame received for 3\nI0217 12:23:07.879709 4070 log.go:172] (0xc000770be0) (3) Data frame handling\nI0217 12:23:07.879733 4070 log.go:172] (0xc000770be0) (3) Data frame sent\nI0217 12:23:08.037307 4070 log.go:172] (0xc0006f8370) (0xc000770be0) Stream removed, broadcasting: 3\nI0217 12:23:08.037557 4070 log.go:172] (0xc0006f8370) Data frame received for 1\nI0217 12:23:08.037783 4070 log.go:172] (0xc0006f8370) (0xc000770d20) Stream removed, broadcasting: 5\nI0217 12:23:08.038062 4070 log.go:172] (0xc0007ae640) (1) Data frame handling\nI0217 12:23:08.038101 4070 log.go:172] (0xc0007ae640) (1) Data frame sent\nI0217 12:23:08.038157 4070 log.go:172] (0xc0006f8370) (0xc0007ae640) Stream removed, broadcasting: 1\nI0217 12:23:08.038196 4070 log.go:172] (0xc0006f8370) Go away received\nI0217 12:23:08.038584 4070 log.go:172] (0xc0006f8370) (0xc0007ae640) Stream removed, broadcasting: 1\nI0217 12:23:08.038643 4070 log.go:172] (0xc0006f8370) (0xc000770be0) Stream removed, broadcasting: 3\nI0217 12:23:08.038689 4070 log.go:172] (0xc0006f8370) (0xc000770d20) Stream removed, broadcasting: 5\n" Feb 17 12:23:08.045: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 17 12:23:08.045: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 17 12:23:19.386: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnpfr/ss2 to complete update Feb 17 12:23:19.386: INFO: Waiting for Pod e2e-tests-statefulset-jnpfr/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 17 12:23:19.386: INFO: Waiting for Pod e2e-tests-statefulset-jnpfr/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 17 12:23:19.386: INFO: Waiting for Pod e2e-tests-statefulset-jnpfr/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 17 12:23:29.403: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnpfr/ss2 to complete update Feb 17 12:23:29.403: INFO: Waiting for Pod e2e-tests-statefulset-jnpfr/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 17 12:23:29.403: INFO: Waiting for Pod e2e-tests-statefulset-jnpfr/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 17 12:23:39.409: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnpfr/ss2 to complete update Feb 17 12:23:39.409: INFO: Waiting for Pod e2e-tests-statefulset-jnpfr/ss2-0 to 
have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 17 12:23:39.409: INFO: Waiting for Pod e2e-tests-statefulset-jnpfr/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 17 12:23:49.464: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnpfr/ss2 to complete update Feb 17 12:23:49.465: INFO: Waiting for Pod e2e-tests-statefulset-jnpfr/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Feb 17 12:23:59.415: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnpfr/ss2 to complete update Feb 17 12:24:09.413: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnpfr/ss2 to complete update STEP: Rolling back to a previous revision Feb 17 12:24:19.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jnpfr ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 17 12:24:20.094: INFO: stderr: "I0217 12:24:19.577407 4092 log.go:172] (0xc000682370) (0xc0006a2640) Create stream\nI0217 12:24:19.577652 4092 log.go:172] (0xc000682370) (0xc0006a2640) Stream added, broadcasting: 1\nI0217 12:24:19.583220 4092 log.go:172] (0xc000682370) Reply frame received for 1\nI0217 12:24:19.583275 4092 log.go:172] (0xc000682370) (0xc00060cc80) Create stream\nI0217 12:24:19.583285 4092 log.go:172] (0xc000682370) (0xc00060cc80) Stream added, broadcasting: 3\nI0217 12:24:19.584625 4092 log.go:172] (0xc000682370) Reply frame received for 3\nI0217 12:24:19.584659 4092 log.go:172] (0xc000682370) (0xc000680000) Create stream\nI0217 12:24:19.584671 4092 log.go:172] (0xc000682370) (0xc000680000) Stream added, broadcasting: 5\nI0217 12:24:19.585797 4092 log.go:172] (0xc000682370) Reply frame received for 5\nI0217 12:24:19.903386 4092 log.go:172] (0xc000682370) Data frame received for 3\nI0217 12:24:19.903806 4092 log.go:172] (0xc00060cc80) (3) Data frame handling\nI0217 12:24:19.903893 4092 log.go:172] (0xc00060cc80) (3) Data frame sent\nI0217 12:24:20.087156 4092 log.go:172] (0xc000682370) Data frame received for 1\nI0217 12:24:20.087235 4092 log.go:172] (0xc0006a2640) (1) Data frame handling\nI0217 12:24:20.087244 4092 log.go:172] (0xc0006a2640) (1) Data frame sent\nI0217 12:24:20.087258 4092 log.go:172] (0xc000682370) (0xc0006a2640) Stream removed, broadcasting: 1\nI0217 12:24:20.088044 4092 log.go:172] (0xc000682370) (0xc00060cc80) Stream removed, broadcasting: 3\nI0217 12:24:20.088546 4092 log.go:172] (0xc000682370) (0xc000680000) Stream removed, broadcasting: 5\nI0217 12:24:20.088589 4092 log.go:172] (0xc000682370) (0xc0006a2640) Stream removed, broadcasting: 1\nI0217 12:24:20.088596 4092 log.go:172] (0xc000682370) (0xc00060cc80) Stream removed, broadcasting: 3\nI0217 12:24:20.088601 4092 log.go:172] (0xc000682370) (0xc000680000) Stream removed, broadcasting: 5\n" Feb 17 12:24:20.095: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 17 12:24:20.095: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 17 12:24:30.284: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 17 12:24:40.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-jnpfr ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 12:24:41.159: INFO: stderr: "I0217 12:24:40.818310 4112 log.go:172] (0xc00085e210) (0xc00085a5a0) Create stream\nI0217 12:24:40.818464 4112 log.go:172] (0xc00085e210) 
(0xc00085a5a0) Stream added, broadcasting: 1\nI0217 12:24:40.826862 4112 log.go:172] (0xc00085e210) Reply frame received for 1\nI0217 12:24:40.826898 4112 log.go:172] (0xc00085e210) (0xc00085a640) Create stream\nI0217 12:24:40.826908 4112 log.go:172] (0xc00085e210) (0xc00085a640) Stream added, broadcasting: 3\nI0217 12:24:40.830051 4112 log.go:172] (0xc00085e210) Reply frame received for 3\nI0217 12:24:40.830075 4112 log.go:172] (0xc00085e210) (0xc00037cd20) Create stream\nI0217 12:24:40.830086 4112 log.go:172] (0xc00085e210) (0xc00037cd20) Stream added, broadcasting: 5\nI0217 12:24:40.831184 4112 log.go:172] (0xc00085e210) Reply frame received for 5\nI0217 12:24:41.019628 4112 log.go:172] (0xc00085e210) Data frame received for 3\nI0217 12:24:41.019671 4112 log.go:172] (0xc00085a640) (3) Data frame handling\nI0217 12:24:41.019686 4112 log.go:172] (0xc00085a640) (3) Data frame sent\nI0217 12:24:41.153864 4112 log.go:172] (0xc00085e210) Data frame received for 1\nI0217 12:24:41.153967 4112 log.go:172] (0xc00085e210) (0xc00037cd20) Stream removed, broadcasting: 5\nI0217 12:24:41.153997 4112 log.go:172] (0xc00085a5a0) (1) Data frame handling\nI0217 12:24:41.154007 4112 log.go:172] (0xc00085a5a0) (1) Data frame sent\nI0217 12:24:41.154029 4112 log.go:172] (0xc00085e210) (0xc00085a640) Stream removed, broadcasting: 3\nI0217 12:24:41.154042 4112 log.go:172] (0xc00085e210) (0xc00085a5a0) Stream removed, broadcasting: 1\nI0217 12:24:41.154145 4112 log.go:172] (0xc00085e210) (0xc00085a5a0) Stream removed, broadcasting: 1\nI0217 12:24:41.154157 4112 log.go:172] (0xc00085e210) (0xc00085a640) Stream removed, broadcasting: 3\nI0217 12:24:41.154164 4112 log.go:172] (0xc00085e210) (0xc00037cd20) Stream removed, broadcasting: 5\nI0217 12:24:41.154347 4112 log.go:172] (0xc00085e210) Go away received\n" Feb 17 12:24:41.159: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 17 12:24:41.159: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 17 12:24:51.279: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnpfr/ss2 to complete update Feb 17 12:24:51.279: INFO: Waiting for Pod e2e-tests-statefulset-jnpfr/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 17 12:24:51.279: INFO: Waiting for Pod e2e-tests-statefulset-jnpfr/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 17 12:25:01.298: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnpfr/ss2 to complete update Feb 17 12:25:01.298: INFO: Waiting for Pod e2e-tests-statefulset-jnpfr/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 17 12:25:01.298: INFO: Waiting for Pod e2e-tests-statefulset-jnpfr/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 17 12:25:11.771: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnpfr/ss2 to complete update Feb 17 12:25:11.771: INFO: Waiting for Pod e2e-tests-statefulset-jnpfr/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 17 12:25:21.316: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnpfr/ss2 to complete update Feb 17 12:25:21.317: INFO: Waiting for Pod e2e-tests-statefulset-jnpfr/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Feb 17 12:25:31.304: INFO: Waiting for StatefulSet e2e-tests-statefulset-jnpfr/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Feb 17 12:25:41.345: INFO: Deleting all statefulset in ns e2e-tests-statefulset-jnpfr Feb 17 12:25:41.353: INFO: Scaling statefulset ss2 to 0 Feb 17 12:26:11.453: INFO: Waiting for statefulset status.replicas updated to 0 Feb 17 12:26:11.460: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:26:11.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-jnpfr" for this suite. Feb 17 12:26:19.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:26:19.849: INFO: namespace: e2e-tests-statefulset-jnpfr, resource: bindings, ignored listing per whitelist Feb 17 12:26:20.013: INFO: namespace e2e-tests-statefulset-jnpfr deletion completed in 8.493257295s • [SLOW TEST:254.421 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:26:20.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:26:20.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-ng94g" for this suite. 
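
[Editor's note] The rolling update and rollback above are driven by changing the pod template image on ss2 and letting the StatefulSet controller replace pods in reverse ordinal order (ss2-2, ss2-1, ss2-0), which is why the log waits on revisions ss2-6c5cd755cd and ss2-7c9b54fd4c. A hedged client-go sketch of the image flip follows; the namespace and images are taken from the log, the container name "nginx" is an assumption about the test fixture, and a recent client-go is assumed.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "e2e-tests-statefulset-jnpfr", "ss2"
	// Strategic-merge patch bumping the template image; the controller then rolls the
	// pods in reverse ordinal order, creating the new controller revision seen in the log.
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"docker.io/library/nginx:1.15-alpine"}]}}}}`)
	if _, err := cs.AppsV1().StatefulSets(ns).Patch(context.Background(), name,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("template updated; re-apply with nginx:1.14-alpine to roll back to the previous revision")
}
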
Feb 17 12:26:28.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:26:28.583: INFO: namespace: e2e-tests-kubelet-test-ng94g, resource: bindings, ignored listing per whitelist Feb 17 12:26:28.687: INFO: namespace e2e-tests-kubelet-test-ng94g deletion completed in 8.277632887s • [SLOW TEST:8.673 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:26:28.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 17 12:26:28.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-6vglf' Feb 17 12:26:31.663: INFO: stderr: "" Feb 17 12:26:31.663: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Feb 17 12:26:33.236: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:26:33.237: INFO: Found 0 / 1 Feb 17 12:26:33.683: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:26:33.683: INFO: Found 0 / 1 Feb 17 12:26:34.678: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:26:34.678: INFO: Found 0 / 1 Feb 17 12:26:35.673: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:26:35.673: INFO: Found 0 / 1 Feb 17 12:26:38.061: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:26:38.061: INFO: Found 0 / 1 Feb 17 12:26:38.715: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:26:38.716: INFO: Found 0 / 1 Feb 17 12:26:39.697: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:26:39.697: INFO: Found 0 / 1 Feb 17 12:26:41.061: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:26:41.061: INFO: Found 0 / 1 Feb 17 12:26:41.676: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:26:41.676: INFO: Found 0 / 1 Feb 17 12:26:42.671: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:26:42.671: INFO: Found 1 / 1 Feb 17 12:26:42.671: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 17 12:26:42.674: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:26:42.674: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
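
[Editor's note] The annotation patch the Kubectl patch test issues (visible in the next log line) has a direct client-go equivalent. A sketch follows, using the pod name, namespace, and patch payload exactly as they appear in the log and assuming a recent client-go.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same payload as: kubectl patch pod redis-master-8bg8n -p '{"metadata":{"annotations":{"x":"y"}}}'
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	pod, err := cs.CoreV1().Pods("e2e-tests-kubectl-6vglf").Patch(context.Background(),
		"redis-master-8bg8n", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("annotations now:", pod.Annotations)
}
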
Feb 17 12:26:42.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-8bg8n --namespace=e2e-tests-kubectl-6vglf -p {"metadata":{"annotations":{"x":"y"}}}' Feb 17 12:26:42.806: INFO: stderr: "" Feb 17 12:26:42.806: INFO: stdout: "pod/redis-master-8bg8n patched\n" STEP: checking annotations Feb 17 12:26:42.849: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:26:42.849: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:26:42.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6vglf" for this suite. Feb 17 12:27:06.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:27:07.131: INFO: namespace: e2e-tests-kubectl-6vglf, resource: bindings, ignored listing per whitelist Feb 17 12:27:07.184: INFO: namespace e2e-tests-kubectl-6vglf deletion completed in 24.259828146s • [SLOW TEST:38.497 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:27:07.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-cfea3169-5180-11ea-a180-0242ac110008 STEP: Creating configMap with name cm-test-opt-upd-cfea3230-5180-11ea-a180-0242ac110008 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-cfea3169-5180-11ea-a180-0242ac110008 STEP: Updating configmap cm-test-opt-upd-cfea3230-5180-11ea-a180-0242ac110008 STEP: Creating configMap with name cm-test-opt-create-cfea325d-5180-11ea-a180-0242ac110008 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:28:34.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lcf6l" for this suite. 
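
[Editor's note] The projected-ConfigMap test above relies on ConfigMap projections marked optional, which is why the pod keeps running while one source ConfigMap is deleted, another is updated, and a third is created. Below is a sketch of such a pod spec, built and printed locally so no cluster is needed; the ConfigMap names, image, and mount path are illustrative rather than the generated names from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	// Two optional ConfigMap projections in one projected volume: deleting one source
	// or creating the other later just updates the mounted view instead of failing the pod.
	vol := corev1.Volume{
		Name: "projected-cm",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						Optional:             &optional,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-optional-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "watcher",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "while true; do ls /etc/projected; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-cm", MountPath: "/etc/projected"}},
			}},
			Volumes: []corev1.Volume{vol},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
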
Feb 17 12:28:59.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:28:59.144: INFO: namespace: e2e-tests-projected-lcf6l, resource: bindings, ignored listing per whitelist Feb 17 12:28:59.233: INFO: namespace e2e-tests-projected-lcf6l deletion completed in 24.441648113s • [SLOW TEST:112.049 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:28:59.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 17 12:28:59.421: INFO: Waiting up to 5m0s for pod "pod-12abf932-5181-11ea-a180-0242ac110008" in namespace "e2e-tests-emptydir-ll57t" to be "success or failure" Feb 17 12:28:59.426: INFO: Pod "pod-12abf932-5181-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.39147ms Feb 17 12:29:01.745: INFO: Pod "pod-12abf932-5181-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324349365s Feb 17 12:29:03.754: INFO: Pod "pod-12abf932-5181-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.332616899s Feb 17 12:29:05.765: INFO: Pod "pod-12abf932-5181-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.343471397s Feb 17 12:29:08.932: INFO: Pod "pod-12abf932-5181-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.510876674s Feb 17 12:29:10.944: INFO: Pod "pod-12abf932-5181-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.522767937s Feb 17 12:29:12.955: INFO: Pod "pod-12abf932-5181-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.533871471s STEP: Saw pod success Feb 17 12:29:12.955: INFO: Pod "pod-12abf932-5181-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:29:12.961: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-12abf932-5181-11ea-a180-0242ac110008 container test-container: STEP: delete the pod Feb 17 12:29:13.656: INFO: Waiting for pod pod-12abf932-5181-11ea-a180-0242ac110008 to disappear Feb 17 12:29:13.665: INFO: Pod pod-12abf932-5181-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:29:13.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ll57t" for this suite. 
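
[Editor's note] The emptyDir cases in this run (0666 and 0777 on tmpfs) come down to a pod with a memory-backed emptyDir volume and a container that inspects the mount, then a wait for "success or failure" like the loops in the log. The sketch below is an approximation: it uses busybox and simple shell checks rather than the e2e mounttest image, the "default" namespace rather than the generated one, and a recent client-go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns := "default" // the test uses a generated e2e-tests-emptydir-* namespace

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// Show that the mount is tmpfs and print its mode bits.
				Command:      []string{"sh", "-c", "mount | grep /mnt/tmpfs && stat -c %a /mnt/tmpfs"},
				VolumeMounts: []corev1.VolumeMount{{Name: "tmpfs-vol", MountPath: "/mnt/tmpfs"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "tmpfs-vol",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Poll for "success or failure" the way the framework's wait loop in the log does.
	for {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, pod.Name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
			fmt.Println("pod finished with phase:", p.Status.Phase)
			return
		}
		time.Sleep(2 * time.Second)
	}
}
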
Feb 17 12:29:22.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:29:22.607: INFO: namespace: e2e-tests-emptydir-ll57t, resource: bindings, ignored listing per whitelist Feb 17 12:29:22.614: INFO: namespace e2e-tests-emptydir-ll57t deletion completed in 8.941028328s • [SLOW TEST:23.381 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:29:22.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 17 12:29:22.824: INFO: Number of nodes with available pods: 0 Feb 17 12:29:22.824: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:23.845: INFO: Number of nodes with available pods: 0 Feb 17 12:29:23.845: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:25.323: INFO: Number of nodes with available pods: 0 Feb 17 12:29:25.323: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:25.843: INFO: Number of nodes with available pods: 0 Feb 17 12:29:25.843: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:26.840: INFO: Number of nodes with available pods: 0 Feb 17 12:29:26.840: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:28.092: INFO: Number of nodes with available pods: 0 Feb 17 12:29:28.092: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:28.838: INFO: Number of nodes with available pods: 0 Feb 17 12:29:28.838: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:29.836: INFO: Number of nodes with available pods: 0 Feb 17 12:29:29.836: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:30.835: INFO: Number of nodes with available pods: 1 Feb 17 12:29:30.835: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Stop a daemon pod, check that the daemon pod is revived. 
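
[Editor's note] The DaemonSet test above creates a simple DaemonSet and then counts nodes with an available daemon pod (the repeated "Number of nodes with available pods" lines). A minimal client-go sketch of the creation step follows; the labels, image, and "default" namespace are illustrative, since the log does not show the test's actual template.

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:    "app",
						Image:   "docker.io/library/busybox:1.29",
						Command: []string{"sleep", "3600"},
					}},
				},
			},
		},
	}
	created, err := cs.AppsV1().DaemonSets("default").Create(context.Background(), ds, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The controller schedules one daemon pod per eligible node; the DaemonSet status
	// counters are what the log's "Number of running nodes / available pods" lines reflect.
	fmt.Println("created daemonset:", created.Name)
}
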
Feb 17 12:29:30.868: INFO: Number of nodes with available pods: 0 Feb 17 12:29:30.868: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:31.887: INFO: Number of nodes with available pods: 0 Feb 17 12:29:31.887: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:32.924: INFO: Number of nodes with available pods: 0 Feb 17 12:29:32.925: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:33.888: INFO: Number of nodes with available pods: 0 Feb 17 12:29:33.888: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:34.887: INFO: Number of nodes with available pods: 0 Feb 17 12:29:34.887: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:35.893: INFO: Number of nodes with available pods: 0 Feb 17 12:29:35.893: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:36.886: INFO: Number of nodes with available pods: 0 Feb 17 12:29:36.886: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:37.893: INFO: Number of nodes with available pods: 0 Feb 17 12:29:37.893: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:38.896: INFO: Number of nodes with available pods: 0 Feb 17 12:29:38.896: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:39.900: INFO: Number of nodes with available pods: 0 Feb 17 12:29:39.900: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:40.896: INFO: Number of nodes with available pods: 0 Feb 17 12:29:40.896: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:41.905: INFO: Number of nodes with available pods: 0 Feb 17 12:29:41.906: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:42.950: INFO: Number of nodes with available pods: 0 Feb 17 12:29:42.950: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:44.233: INFO: Number of nodes with available pods: 0 Feb 17 12:29:44.233: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:44.923: INFO: Number of nodes with available pods: 0 Feb 17 12:29:44.923: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:45.885: INFO: Number of nodes with available pods: 0 Feb 17 12:29:45.885: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:46.925: INFO: Number of nodes with available pods: 0 Feb 17 12:29:46.925: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:48.418: INFO: Number of nodes with available pods: 0 Feb 17 12:29:48.418: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:48.901: INFO: Number of nodes with available pods: 0 Feb 17 12:29:48.901: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:49.928: INFO: Number of nodes with available pods: 0 Feb 17 12:29:49.928: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:50.922: INFO: Number of nodes with available pods: 0 Feb 17 12:29:50.923: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:29:51.927: INFO: Number of nodes with available pods: 1 Feb 17 12:29:51.927: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] 
Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-mn698, will wait for the garbage collector to delete the pods Feb 17 12:29:52.001: INFO: Deleting DaemonSet.extensions daemon-set took: 14.514743ms Feb 17 12:29:52.201: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.827343ms Feb 17 12:30:02.633: INFO: Number of nodes with available pods: 0 Feb 17 12:30:02.633: INFO: Number of running nodes: 0, number of available pods: 0 Feb 17 12:30:02.643: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mn698/daemonsets","resourceVersion":"21979151"},"items":null} Feb 17 12:30:02.652: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mn698/pods","resourceVersion":"21979151"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:30:02.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-mn698" for this suite. Feb 17 12:30:08.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:30:08.834: INFO: namespace: e2e-tests-daemonsets-mn698, resource: bindings, ignored listing per whitelist Feb 17 12:30:09.015: INFO: namespace e2e-tests-daemonsets-mn698 deletion completed in 6.326898686s • [SLOW TEST:46.400 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:30:09.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-3c4d1db6-5181-11ea-a180-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 17 12:30:09.300: INFO: Waiting up to 5m0s for pod "pod-configmaps-3c4f2931-5181-11ea-a180-0242ac110008" in namespace "e2e-tests-configmap-twjlm" to be "success or failure" Feb 17 12:30:09.472: INFO: Pod "pod-configmaps-3c4f2931-5181-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 172.065579ms Feb 17 12:30:11.567: INFO: Pod "pod-configmaps-3c4f2931-5181-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267407938s Feb 17 12:30:13.610: INFO: Pod "pod-configmaps-3c4f2931-5181-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.310333798s Feb 17 12:30:15.765: INFO: Pod "pod-configmaps-3c4f2931-5181-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.464749574s Feb 17 12:30:17.779: INFO: Pod "pod-configmaps-3c4f2931-5181-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.479630032s Feb 17 12:30:20.308: INFO: Pod "pod-configmaps-3c4f2931-5181-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.008626858s Feb 17 12:30:22.387: INFO: Pod "pod-configmaps-3c4f2931-5181-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.087021931s STEP: Saw pod success Feb 17 12:30:22.387: INFO: Pod "pod-configmaps-3c4f2931-5181-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:30:22.669: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-3c4f2931-5181-11ea-a180-0242ac110008 container configmap-volume-test: STEP: delete the pod Feb 17 12:30:22.740: INFO: Waiting for pod pod-configmaps-3c4f2931-5181-11ea-a180-0242ac110008 to disappear Feb 17 12:30:22.746: INFO: Pod pod-configmaps-3c4f2931-5181-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:30:22.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-twjlm" for this suite. Feb 17 12:30:28.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:30:28.888: INFO: namespace: e2e-tests-configmap-twjlm, resource: bindings, ignored listing per whitelist Feb 17 12:30:29.002: INFO: namespace e2e-tests-configmap-twjlm deletion completed in 6.250361416s • [SLOW TEST:19.987 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:30:29.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 17 12:30:29.236: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-h9dzj,SelfLink:/api/v1/namespaces/e2e-tests-watch-h9dzj/configmaps/e2e-watch-test-label-changed,UID:483209ff-5181-11ea-a994-fa163e34d433,ResourceVersion:21979228,Generation:0,CreationTimestamp:2020-02-17 12:30:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 17 12:30:29.237: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-h9dzj,SelfLink:/api/v1/namespaces/e2e-tests-watch-h9dzj/configmaps/e2e-watch-test-label-changed,UID:483209ff-5181-11ea-a994-fa163e34d433,ResourceVersion:21979229,Generation:0,CreationTimestamp:2020-02-17 12:30:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 17 12:30:29.237: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-h9dzj,SelfLink:/api/v1/namespaces/e2e-tests-watch-h9dzj/configmaps/e2e-watch-test-label-changed,UID:483209ff-5181-11ea-a994-fa163e34d433,ResourceVersion:21979230,Generation:0,CreationTimestamp:2020-02-17 12:30:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 17 12:30:39.312: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-h9dzj,SelfLink:/api/v1/namespaces/e2e-tests-watch-h9dzj/configmaps/e2e-watch-test-label-changed,UID:483209ff-5181-11ea-a994-fa163e34d433,ResourceVersion:21979244,Generation:0,CreationTimestamp:2020-02-17 12:30:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 17 12:30:39.313: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-h9dzj,SelfLink:/api/v1/namespaces/e2e-tests-watch-h9dzj/configmaps/e2e-watch-test-label-changed,UID:483209ff-5181-11ea-a994-fa163e34d433,ResourceVersion:21979245,Generation:0,CreationTimestamp:2020-02-17 12:30:29 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Feb 17 12:30:39.313: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-h9dzj,SelfLink:/api/v1/namespaces/e2e-tests-watch-h9dzj/configmaps/e2e-watch-test-label-changed,UID:483209ff-5181-11ea-a994-fa163e34d433,ResourceVersion:21979246,Generation:0,CreationTimestamp:2020-02-17 12:30:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:30:39.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-h9dzj" for this suite. Feb 17 12:30:45.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:30:45.447: INFO: namespace: e2e-tests-watch-h9dzj, resource: bindings, ignored listing per whitelist Feb 17 12:30:45.593: INFO: namespace e2e-tests-watch-h9dzj deletion completed in 6.270457966s • [SLOW TEST:16.591 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:30:45.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0217 12:30:48.817493 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
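The garbage-collector steps just above (create the deployment, delete it, wait for the ReplicaSet and pods to be collected) boil down to a delete call with a non-orphaning propagation policy. A minimal client-go sketch follows; it is illustrative only, assumes a recent, context-aware client-go and an already-constructed clientset (kubernetes.NewForConfig over the kubeconfig shown in the log), and the namespace and deployment names are placeholders rather than the generated e2e ones.

// gc_sketch.go — sketch of the "delete RS created by deployment when not orphaning"
// flow: delete a Deployment with a non-orphaning propagation policy and let the
// garbage collector remove its ReplicaSet and Pods. Names are illustrative.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteDeploymentAndDependents(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	// Foreground (or Background) propagation means "not orphaning": dependents such
	// as the ReplicaSet are garbage collected, which is what the test polls for.
	policy := metav1.DeletePropagationForeground
	return client.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}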
Feb 17 12:30:48.817: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:30:48.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-725wq" for this suite. Feb 17 12:30:54.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:30:56.756: INFO: namespace: e2e-tests-gc-725wq, resource: bindings, ignored listing per whitelist Feb 17 12:30:56.813: INFO: namespace e2e-tests-gc-725wq deletion completed in 7.970161607s • [SLOW TEST:11.219 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:30:56.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-8ft2l Feb 17 12:31:06.992: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-8ft2l STEP: checking the pod's current state and verifying that restartCount is present Feb 17 12:31:06.999: INFO: Initial restart count of pod liveness-http is 0 Feb 17 12:31:35.622: INFO: Restart count of pod e2e-tests-container-probe-8ft2l/liveness-http is now 1 (28.622042437s elapsed) STEP: deleting the pod [AfterEach] 
[k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:31:35.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-8ft2l" for this suite. Feb 17 12:31:41.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:31:41.980: INFO: namespace: e2e-tests-container-probe-8ft2l, resource: bindings, ignored listing per whitelist Feb 17 12:31:42.030: INFO: namespace e2e-tests-container-probe-8ft2l deletion completed in 6.34438396s • [SLOW TEST:45.217 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:31:42.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-2zsc8 Feb 17 12:31:52.549: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-2zsc8 STEP: checking the pod's current state and verifying that restartCount is present Feb 17 12:31:52.561: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:35:54.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-2zsc8" for this suite. 
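Both liveness-http tests above hinge on the same pod shape: a container with an httpGet probe on /healthz whose failures drive the restart count. The sketch below shows that shape with client-go; the image, port, and probe timings are placeholders (assumptions), not what the e2e suite actually deploys, and it assumes an already-constructed clientset.

// liveness_sketch.go — sketch of the pod shape behind the liveness-http tests:
// a container with an HTTP GET probe on /healthz. Image, port, and timings are
// placeholders.
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

func createLivenessPod(ctx context.Context, client kubernetes.Interface, ns string) error {
	probe := corev1.Probe{
		InitialDelaySeconds: 15,
		PeriodSeconds:       3,
		FailureThreshold:    1,
	}
	// HTTPGet is promoted from the embedded handler struct, so this assignment works
	// across k8s.io/api versions that renamed Handler to ProbeHandler.
	probe.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "liveness",
				Image:         "registry.example/liveness:latest", // placeholder image
				LivenessProbe: &probe,
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}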
Feb 17 12:36:02.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:36:02.826: INFO: namespace: e2e-tests-container-probe-2zsc8, resource: bindings, ignored listing per whitelist Feb 17 12:36:02.893: INFO: namespace e2e-tests-container-probe-2zsc8 deletion completed in 8.4480166s • [SLOW TEST:260.863 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:36:02.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-0f5670a7-5182-11ea-a180-0242ac110008 STEP: Creating a pod to test consume configMaps Feb 17 12:36:03.357: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f5aab74-5182-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-698d9" to be "success or failure" Feb 17 12:36:03.375: INFO: Pod "pod-projected-configmaps-0f5aab74-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.803686ms Feb 17 12:36:05.773: INFO: Pod "pod-projected-configmaps-0f5aab74-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.414966851s Feb 17 12:36:07.787: INFO: Pod "pod-projected-configmaps-0f5aab74-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.429393522s Feb 17 12:36:09.934: INFO: Pod "pod-projected-configmaps-0f5aab74-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.576263362s Feb 17 12:36:11.963: INFO: Pod "pod-projected-configmaps-0f5aab74-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.605646059s Feb 17 12:36:13.981: INFO: Pod "pod-projected-configmaps-0f5aab74-5182-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.623634309s STEP: Saw pod success Feb 17 12:36:13.981: INFO: Pod "pod-projected-configmaps-0f5aab74-5182-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:36:13.987: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-0f5aab74-5182-11ea-a180-0242ac110008 container projected-configmap-volume-test: STEP: delete the pod Feb 17 12:36:15.087: INFO: Waiting for pod pod-projected-configmaps-0f5aab74-5182-11ea-a180-0242ac110008 to disappear Feb 17 12:36:15.111: INFO: Pod pod-projected-configmaps-0f5aab74-5182-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:36:15.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-698d9" for this suite. Feb 17 12:36:21.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:36:21.865: INFO: namespace: e2e-tests-projected-698d9, resource: bindings, ignored listing per whitelist Feb 17 12:36:21.876: INFO: namespace e2e-tests-projected-698d9 deletion completed in 6.300681671s • [SLOW TEST:18.982 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:36:21.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 12:36:22.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Feb 17 12:36:22.325: INFO: stderr: "" Feb 17 12:36:22.325: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Feb 17 12:36:22.335: INFO: Not supported for server versions before "1.13.12" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:36:22.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6p589" for this suite. 
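Looking back at the projected ConfigMap test above (mappings and an explicit item mode), the interesting part is the volume source: a projected volume whose ConfigMap source remaps one key to a new path with its own file mode. A hedged sketch follows, with illustrative names and a 0400 mode standing in for whatever the suite actually sets.

// projected_sketch.go — sketch of a projected ConfigMap volume with a key remapped
// to a new path and an explicit file mode. Names and mode are illustrative; assumes
// an existing ConfigMap with a key "data-1".
package sketch

import corev1 "k8s.io/api/core/v1"

func projectedConfigMapVolume() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume-map",
						},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "projected-configmap-volume/data-1",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}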
Feb 17 12:36:28.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:36:28.427: INFO: namespace: e2e-tests-kubectl-6p589, resource: bindings, ignored listing per whitelist Feb 17 12:36:28.759: INFO: namespace e2e-tests-kubectl-6p589 deletion completed in 6.404790063s S [SKIPPING] [6.883 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 12:36:22.335: Not supported for server versions before "1.13.12" /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:36:28.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 17 12:36:29.202: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-ddx8k,SelfLink:/api/v1/namespaces/e2e-tests-watch-ddx8k/configmaps/e2e-watch-test-resource-version,UID:1eaa331a-5182-11ea-a994-fa163e34d433,ResourceVersion:21979800,Generation:0,CreationTimestamp:2020-02-17 12:36:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 17 12:36:29.203: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-ddx8k,SelfLink:/api/v1/namespaces/e2e-tests-watch-ddx8k/configmaps/e2e-watch-test-resource-version,UID:1eaa331a-5182-11ea-a994-fa163e34d433,ResourceVersion:21979801,Generation:0,CreationTimestamp:2020-02-17 12:36:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] 
Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:36:29.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-ddx8k" for this suite. Feb 17 12:36:35.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:36:35.272: INFO: namespace: e2e-tests-watch-ddx8k, resource: bindings, ignored listing per whitelist Feb 17 12:36:35.409: INFO: namespace e2e-tests-watch-ddx8k deletion completed in 6.199475583s • [SLOW TEST:6.650 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:36:35.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xx7vf STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 17 12:36:35.620: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 17 12:37:14.016: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-xx7vf PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 17 12:37:14.016: INFO: >>> kubeConfig: /root/.kube/config I0217 12:37:14.129031 8 log.go:172] (0xc001e5a370) (0xc00269ab40) Create stream I0217 12:37:14.129589 8 log.go:172] (0xc001e5a370) (0xc00269ab40) Stream added, broadcasting: 1 I0217 12:37:14.138271 8 log.go:172] (0xc001e5a370) Reply frame received for 1 I0217 12:37:14.138308 8 log.go:172] (0xc001e5a370) (0xc00269abe0) Create stream I0217 12:37:14.138314 8 log.go:172] (0xc001e5a370) (0xc00269abe0) Stream added, broadcasting: 3 I0217 12:37:14.139096 8 log.go:172] (0xc001e5a370) Reply frame received for 3 I0217 12:37:14.139123 8 log.go:172] (0xc001e5a370) (0xc002162460) Create stream I0217 12:37:14.139133 8 log.go:172] (0xc001e5a370) (0xc002162460) Stream added, broadcasting: 5 I0217 12:37:14.140151 8 log.go:172] (0xc001e5a370) Reply frame received for 5 I0217 12:37:15.281878 8 log.go:172] (0xc001e5a370) Data frame received for 3 I0217 12:37:15.282009 8 log.go:172] (0xc00269abe0) (3) Data frame handling I0217 12:37:15.282064 8 log.go:172] (0xc00269abe0) (3) Data frame sent I0217 12:37:15.508562 8 log.go:172] (0xc001e5a370) Data frame received for 1 I0217 12:37:15.508882 8 log.go:172] (0xc001e5a370) (0xc00269abe0) Stream removed, broadcasting: 3 
I0217 12:37:15.509009 8 log.go:172] (0xc00269ab40) (1) Data frame handling I0217 12:37:15.509086 8 log.go:172] (0xc00269ab40) (1) Data frame sent I0217 12:37:15.509118 8 log.go:172] (0xc001e5a370) (0xc00269ab40) Stream removed, broadcasting: 1 I0217 12:37:15.509393 8 log.go:172] (0xc001e5a370) (0xc002162460) Stream removed, broadcasting: 5 I0217 12:37:15.509594 8 log.go:172] (0xc001e5a370) Go away received I0217 12:37:15.509697 8 log.go:172] (0xc001e5a370) (0xc00269ab40) Stream removed, broadcasting: 1 I0217 12:37:15.509751 8 log.go:172] (0xc001e5a370) (0xc00269abe0) Stream removed, broadcasting: 3 I0217 12:37:15.509761 8 log.go:172] (0xc001e5a370) (0xc002162460) Stream removed, broadcasting: 5 Feb 17 12:37:15.509: INFO: Found all expected endpoints: [netserver-0] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:37:15.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-xx7vf" for this suite. Feb 17 12:37:41.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:37:41.783: INFO: namespace: e2e-tests-pod-network-test-xx7vf, resource: bindings, ignored listing per whitelist Feb 17 12:37:41.862: INFO: namespace e2e-tests-pod-network-test-xx7vf deletion completed in 26.334721385s • [SLOW TEST:66.453 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:37:41.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 17 12:38:06.243: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:06.262: INFO: Pod pod-with-prestop-exec-hook still exists Feb 17 12:38:08.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:08.288: INFO: Pod pod-with-prestop-exec-hook still exists Feb 17 12:38:10.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:10.282: INFO: Pod pod-with-prestop-exec-hook still exists Feb 17 12:38:12.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:12.277: INFO: Pod pod-with-prestop-exec-hook still exists Feb 17 12:38:14.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:14.276: INFO: Pod pod-with-prestop-exec-hook still exists Feb 17 12:38:16.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:16.281: INFO: Pod pod-with-prestop-exec-hook still exists Feb 17 12:38:18.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:18.282: INFO: Pod pod-with-prestop-exec-hook still exists Feb 17 12:38:20.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:20.282: INFO: Pod pod-with-prestop-exec-hook still exists Feb 17 12:38:22.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:22.292: INFO: Pod pod-with-prestop-exec-hook still exists Feb 17 12:38:24.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:24.277: INFO: Pod pod-with-prestop-exec-hook still exists Feb 17 12:38:26.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:26.330: INFO: Pod pod-with-prestop-exec-hook still exists Feb 17 12:38:28.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:28.277: INFO: Pod pod-with-prestop-exec-hook still exists Feb 17 12:38:30.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:30.338: INFO: Pod pod-with-prestop-exec-hook still exists Feb 17 12:38:32.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:32.304: INFO: Pod pod-with-prestop-exec-hook still exists Feb 17 12:38:34.262: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 17 12:38:34.338: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:38:34.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-tvn7k" for this suite. 
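The pod-with-prestop-exec-hook above is, at its core, a container carrying a preStop exec hook: the kubelet runs that command before stopping the container, bounded by the termination grace period, which is consistent with the pod lingering for a while after deletion. A sketch with a placeholder image and command follows; corev1.LifecycleHandler is the current type name (older k8s.io/api releases call it Handler).

// prestop_sketch.go — sketch of a container with a preStop exec hook. Image and
// command are placeholders, not the e2e suite's.
package sketch

import corev1 "k8s.io/api/core/v1"

func prestopContainer() corev1.Container {
	return corev1.Container{
		Name:  "pod-with-prestop-exec-hook",
		Image: "registry.example/agnhost:latest", // placeholder image
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.LifecycleHandler{
				Exec: &corev1.ExecAction{
					// Runs inside the container just before it is stopped; the kubelet
					// waits for it, up to terminationGracePeriodSeconds.
					Command: []string{"sh", "-c", "sleep 5"},
				},
			},
		},
	}
}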
Feb 17 12:38:56.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:38:56.617: INFO: namespace: e2e-tests-container-lifecycle-hook-tvn7k, resource: bindings, ignored listing per whitelist Feb 17 12:38:56.670: INFO: namespace e2e-tests-container-lifecycle-hook-tvn7k deletion completed in 22.289265004s • [SLOW TEST:74.807 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:38:56.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-m2xh STEP: Creating a pod to test atomic-volume-subpath Feb 17 12:38:57.028: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-m2xh" in namespace "e2e-tests-subpath-w8ctp" to be "success or failure" Feb 17 12:38:57.051: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Pending", Reason="", readiness=false. Elapsed: 22.80519ms Feb 17 12:38:59.501: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.473302066s Feb 17 12:39:01.547: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.518676745s Feb 17 12:39:03.685: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.657286216s Feb 17 12:39:05.701: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.672719029s Feb 17 12:39:07.994: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.966520896s Feb 17 12:39:10.201: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Pending", Reason="", readiness=false. Elapsed: 13.173414278s Feb 17 12:39:12.366: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Running", Reason="", readiness=true. Elapsed: 15.338224175s Feb 17 12:39:14.381: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Running", Reason="", readiness=false. Elapsed: 17.353129972s Feb 17 12:39:16.394: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Running", Reason="", readiness=false. Elapsed: 19.366116893s Feb 17 12:39:18.401: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Running", Reason="", readiness=false. 
Elapsed: 21.373062082s Feb 17 12:39:20.417: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Running", Reason="", readiness=false. Elapsed: 23.388813314s Feb 17 12:39:22.443: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Running", Reason="", readiness=false. Elapsed: 25.415474469s Feb 17 12:39:24.482: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Running", Reason="", readiness=false. Elapsed: 27.454348234s Feb 17 12:39:26.512: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Running", Reason="", readiness=false. Elapsed: 29.483851511s Feb 17 12:39:28.580: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Running", Reason="", readiness=false. Elapsed: 31.551914012s Feb 17 12:39:30.608: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Running", Reason="", readiness=false. Elapsed: 33.579961353s Feb 17 12:39:32.630: INFO: Pod "pod-subpath-test-configmap-m2xh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.601821508s STEP: Saw pod success Feb 17 12:39:32.630: INFO: Pod "pod-subpath-test-configmap-m2xh" satisfied condition "success or failure" Feb 17 12:39:32.640: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-m2xh container test-container-subpath-configmap-m2xh: STEP: delete the pod Feb 17 12:39:33.372: INFO: Waiting for pod pod-subpath-test-configmap-m2xh to disappear Feb 17 12:39:33.747: INFO: Pod pod-subpath-test-configmap-m2xh no longer exists STEP: Deleting pod pod-subpath-test-configmap-m2xh Feb 17 12:39:33.747: INFO: Deleting pod "pod-subpath-test-configmap-m2xh" in namespace "e2e-tests-subpath-w8ctp" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:39:33.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-w8ctp" for this suite. 
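The atomic-writer subpath test above exercises mounting a single ConfigMap entry into the container via VolumeMount.SubPath rather than mounting the whole volume directory. A minimal sketch of that mechanism, with the ConfigMap name, key, image, and paths as assumptions rather than the generated e2e names:

// subpath_sketch.go — sketch of a ConfigMap volume mounted into a container at a
// single file via VolumeMount.SubPath. Names and paths are illustrative.
package sketch

import corev1 "k8s.io/api/core/v1"

func subpathPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "config-volume",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:    "test-container-subpath",
			Image:   "busybox",
			Command: []string{"sh", "-c", "cat /etc/podinfo/file.txt"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "config-volume",
				MountPath: "/etc/podinfo/file.txt",
				// SubPath selects a single entry from the volume instead of
				// mounting the whole ConfigMap directory.
				SubPath: "file.txt",
			}},
		}},
	}
}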
Feb 17 12:39:41.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:39:42.158: INFO: namespace: e2e-tests-subpath-w8ctp, resource: bindings, ignored listing per whitelist Feb 17 12:39:42.223: INFO: namespace e2e-tests-subpath-w8ctp deletion completed in 8.433936686s • [SLOW TEST:45.552 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:39:42.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-57fvl STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-57fvl to expose endpoints map[] Feb 17 12:39:42.873: INFO: Get endpoints failed (103.57293ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Feb 17 12:39:43.897: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-57fvl exposes endpoints map[] (1.127793566s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-57fvl STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-57fvl to expose endpoints map[pod1:[80]] Feb 17 12:39:48.495: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.500075778s elapsed, will retry) Feb 17 12:39:51.728: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-57fvl exposes endpoints map[pod1:[80]] (7.73243446s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-57fvl STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-57fvl to expose endpoints map[pod1:[80] pod2:[80]] Feb 17 12:39:57.750: INFO: Unexpected endpoints: found map[92d3fee6-5182-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (6.010928756s elapsed, will retry) Feb 17 12:40:01.026: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-57fvl exposes endpoints map[pod1:[80] pod2:[80]] (9.287078913s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-57fvl STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-57fvl to expose endpoints map[pod2:[80]] Feb 17 12:40:01.244: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-57fvl exposes endpoints map[pod2:[80]] (196.086891ms elapsed) STEP: Deleting pod pod2 in namespace 
e2e-tests-services-57fvl STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-57fvl to expose endpoints map[] Feb 17 12:40:01.300: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-57fvl exposes endpoints map[] (47.424779ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:40:01.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-57fvl" for this suite. Feb 17 12:40:26.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:40:26.420: INFO: namespace: e2e-tests-services-57fvl, resource: bindings, ignored listing per whitelist Feb 17 12:40:26.517: INFO: namespace e2e-tests-services-57fvl deletion completed in 25.069624437s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:44.295 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:40:26.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Feb 17 12:40:26.736: INFO: Waiting up to 5m0s for pod "downward-api-ac56fed5-5182-11ea-a180-0242ac110008" in namespace "e2e-tests-downward-api-9s4jk" to be "success or failure" Feb 17 12:40:26.766: INFO: Pod "downward-api-ac56fed5-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 29.889016ms Feb 17 12:40:29.499: INFO: Pod "downward-api-ac56fed5-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.763419284s Feb 17 12:40:31.567: INFO: Pod "downward-api-ac56fed5-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.831067756s Feb 17 12:40:33.656: INFO: Pod "downward-api-ac56fed5-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.920757502s Feb 17 12:40:35.680: INFO: Pod "downward-api-ac56fed5-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.943806623s Feb 17 12:40:37.701: INFO: Pod "downward-api-ac56fed5-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.965580317s Feb 17 12:40:40.004: INFO: Pod "downward-api-ac56fed5-5182-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.268093816s STEP: Saw pod success Feb 17 12:40:40.004: INFO: Pod "downward-api-ac56fed5-5182-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:40:40.013: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-ac56fed5-5182-11ea-a180-0242ac110008 container dapi-container: STEP: delete the pod Feb 17 12:40:40.239: INFO: Waiting for pod downward-api-ac56fed5-5182-11ea-a180-0242ac110008 to disappear Feb 17 12:40:40.267: INFO: Pod downward-api-ac56fed5-5182-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:40:40.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9s4jk" for this suite. Feb 17 12:40:46.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:40:46.546: INFO: namespace: e2e-tests-downward-api-9s4jk, resource: bindings, ignored listing per whitelist Feb 17 12:40:46.586: INFO: namespace e2e-tests-downward-api-9s4jk deletion completed in 6.265141086s • [SLOW TEST:20.067 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:40:46.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-b84ce817-5182-11ea-a180-0242ac110008 STEP: Creating secret with name s-test-opt-upd-b84ce9a4-5182-11ea-a180-0242ac110008 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b84ce817-5182-11ea-a180-0242ac110008 STEP: Updating secret s-test-opt-upd-b84ce9a4-5182-11ea-a180-0242ac110008 STEP: Creating secret with name s-test-opt-create-b84ce9d7-5182-11ea-a180-0242ac110008 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:41:07.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-tnmts" for this suite. 
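The s-test-opt-del/upd/create sequence above works because the secret volumes are marked optional: the pod can start while a referenced Secret is missing, and the kubelet projects the data in once it appears or changes. A short sketch of such a volume, with illustrative names:

// optional_secret_sketch.go — sketch of an optional Secret volume: Optional=true
// lets the pod start (and later pick the data up) even while the Secret is absent.
package sketch

import corev1 "k8s.io/api/core/v1"

func optionalSecretVolume() corev1.Volume {
	optional := true
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "s-test-opt-create",
				Optional:   &optional,
			},
		},
	}
}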
Feb 17 12:41:31.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:41:31.424: INFO: namespace: e2e-tests-secrets-tnmts, resource: bindings, ignored listing per whitelist Feb 17 12:41:31.499: INFO: namespace e2e-tests-secrets-tnmts deletion completed in 24.209805092s • [SLOW TEST:44.913 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:41:31.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Feb 17 12:41:31.764: INFO: Waiting up to 5m0s for pod "pod-d318e788-5182-11ea-a180-0242ac110008" in namespace "e2e-tests-emptydir-p57rd" to be "success or failure" Feb 17 12:41:31.804: INFO: Pod "pod-d318e788-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 39.682666ms Feb 17 12:41:33.814: INFO: Pod "pod-d318e788-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049678867s Feb 17 12:41:35.824: INFO: Pod "pod-d318e788-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059806672s Feb 17 12:41:38.542: INFO: Pod "pod-d318e788-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.777731341s Feb 17 12:41:40.817: INFO: Pod "pod-d318e788-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.052896701s Feb 17 12:41:42.855: INFO: Pod "pod-d318e788-5182-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.091478526s Feb 17 12:41:44.922: INFO: Pod "pod-d318e788-5182-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.157573088s STEP: Saw pod success Feb 17 12:41:44.922: INFO: Pod "pod-d318e788-5182-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:41:44.965: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-d318e788-5182-11ea-a180-0242ac110008 container test-container: STEP: delete the pod Feb 17 12:41:45.154: INFO: Waiting for pod pod-d318e788-5182-11ea-a180-0242ac110008 to disappear Feb 17 12:41:45.177: INFO: Pod pod-d318e788-5182-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:41:45.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-p57rd" for this suite. 
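For the emptyDir case above, "default medium" simply means leaving Medium unset so the volume is backed by node disk (corev1.StorageMediumMemory would request tmpfs instead), and the test container only has to report the mount's mode. A sketch with a placeholder image and command:

// emptydir_sketch.go — sketch of an emptyDir volume on the default medium whose
// container stats the mount to report its mode. Image and command are placeholders.
package sketch

import corev1 "k8s.io/api/core/v1"

func emptyDirPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Default medium: leave Medium empty (node disk rather than tmpfs).
				EmptyDir: &corev1.EmptyDirVolumeSource{},
			},
		}},
		Containers: []corev1.Container{{
			Name:         "test-container",
			Image:        "busybox",
			Command:      []string{"sh", "-c", "stat -c '%a' /test-volume"},
			VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
	}
}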
Feb 17 12:41:51.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:41:51.309: INFO: namespace: e2e-tests-emptydir-p57rd, resource: bindings, ignored listing per whitelist Feb 17 12:41:51.524: INFO: namespace e2e-tests-emptydir-p57rd deletion completed in 6.332854208s • [SLOW TEST:20.024 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:41:51.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 12:41:51.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 17 12:41:52.126: INFO: stderr: "" Feb 17 12:41:52.127: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:41:52.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2nmtx" for this suite. 
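The kubectl version check above amounts to running the same CLI invocation and asserting that both the client and server halves of the report are present. A small stand-alone sketch; it assumes kubectl is on PATH and that you adjust the kubeconfig path for your environment.

// kubectl_version_sketch.go — reproduces the version check by shelling out to the
// same kubectl invocation the log shows and testing for both report halves.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config", "version").CombinedOutput()
	if err != nil {
		panic(err)
	}
	// The e2e assertion is essentially this: both halves of the version report exist.
	for _, want := range []string{"Client Version", "Server Version"} {
		fmt.Printf("%s present: %v\n", want, strings.Contains(string(out), want))
	}
}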
Feb 17 12:41:58.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:41:58.367: INFO: namespace: e2e-tests-kubectl-2nmtx, resource: bindings, ignored listing per whitelist Feb 17 12:41:58.426: INFO: namespace e2e-tests-kubectl-2nmtx deletion completed in 6.271835905s • [SLOW TEST:6.902 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:41:58.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 17 12:41:58.803: INFO: Number of nodes with available pods: 0 Feb 17 12:41:58.804: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:41:59.828: INFO: Number of nodes with available pods: 0 Feb 17 12:41:59.828: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:42:00.839: INFO: Number of nodes with available pods: 0 Feb 17 12:42:00.839: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:42:02.238: INFO: Number of nodes with available pods: 0 Feb 17 12:42:02.238: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:42:02.834: INFO: Number of nodes with available pods: 0 Feb 17 12:42:02.834: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:42:03.846: INFO: Number of nodes with available pods: 0 Feb 17 12:42:03.846: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:42:05.159: INFO: Number of nodes with available pods: 0 Feb 17 12:42:05.159: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:42:06.733: INFO: Number of nodes with available pods: 0 Feb 17 12:42:06.733: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:42:06.837: INFO: Number of nodes with available pods: 0 Feb 17 12:42:06.837: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:42:07.840: INFO: Number of nodes with available pods: 0 Feb 17 12:42:07.840: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:42:08.819: INFO: Number of nodes with available pods: 1 Feb 17 12:42:08.819: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Set a 
daemon pod's phase to 'Failed', check that the daemon pod is revived. Feb 17 12:42:08.952: INFO: Number of nodes with available pods: 1 Feb 17 12:42:08.952: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-l9s8g, will wait for the garbage collector to delete the pods Feb 17 12:42:10.083: INFO: Deleting DaemonSet.extensions daemon-set took: 23.271951ms Feb 17 12:42:11.483: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.40055448s Feb 17 12:42:16.800: INFO: Number of nodes with available pods: 0 Feb 17 12:42:16.800: INFO: Number of running nodes: 0, number of available pods: 0 Feb 17 12:42:16.804: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-l9s8g/daemonsets","resourceVersion":"21980537"},"items":null} Feb 17 12:42:16.806: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-l9s8g/pods","resourceVersion":"21980537"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:42:16.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-l9s8g" for this suite. Feb 17 12:42:24.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:42:24.978: INFO: namespace: e2e-tests-daemonsets-l9s8g, resource: bindings, ignored listing per whitelist Feb 17 12:42:25.036: INFO: namespace e2e-tests-daemonsets-l9s8g deletion completed in 8.214640462s • [SLOW TEST:26.609 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:42:25.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 17 12:42:57.723: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6r5zj PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 17 
12:42:57.723: INFO: >>> kubeConfig: /root/.kube/config I0217 12:42:57.851293 8 log.go:172] (0xc001e5a370) (0xc002110780) Create stream I0217 12:42:57.851808 8 log.go:172] (0xc001e5a370) (0xc002110780) Stream added, broadcasting: 1 I0217 12:42:57.867444 8 log.go:172] (0xc001e5a370) Reply frame received for 1 I0217 12:42:57.867652 8 log.go:172] (0xc001e5a370) (0xc002358000) Create stream I0217 12:42:57.867671 8 log.go:172] (0xc001e5a370) (0xc002358000) Stream added, broadcasting: 3 I0217 12:42:57.870385 8 log.go:172] (0xc001e5a370) Reply frame received for 3 I0217 12:42:57.870476 8 log.go:172] (0xc001e5a370) (0xc002110820) Create stream I0217 12:42:57.870528 8 log.go:172] (0xc001e5a370) (0xc002110820) Stream added, broadcasting: 5 I0217 12:42:57.872969 8 log.go:172] (0xc001e5a370) Reply frame received for 5 I0217 12:42:58.148839 8 log.go:172] (0xc001e5a370) Data frame received for 3 I0217 12:42:58.148936 8 log.go:172] (0xc002358000) (3) Data frame handling I0217 12:42:58.148961 8 log.go:172] (0xc002358000) (3) Data frame sent I0217 12:42:58.306035 8 log.go:172] (0xc001e5a370) (0xc002358000) Stream removed, broadcasting: 3 I0217 12:42:58.306221 8 log.go:172] (0xc001e5a370) Data frame received for 1 I0217 12:42:58.306233 8 log.go:172] (0xc002110780) (1) Data frame handling I0217 12:42:58.306249 8 log.go:172] (0xc002110780) (1) Data frame sent I0217 12:42:58.306259 8 log.go:172] (0xc001e5a370) (0xc002110780) Stream removed, broadcasting: 1 I0217 12:42:58.306468 8 log.go:172] (0xc001e5a370) (0xc002110820) Stream removed, broadcasting: 5 I0217 12:42:58.306520 8 log.go:172] (0xc001e5a370) (0xc002110780) Stream removed, broadcasting: 1 I0217 12:42:58.306575 8 log.go:172] (0xc001e5a370) (0xc002358000) Stream removed, broadcasting: 3 I0217 12:42:58.306605 8 log.go:172] (0xc001e5a370) (0xc002110820) Stream removed, broadcasting: 5 Feb 17 12:42:58.307: INFO: Exec stderr: "" Feb 17 12:42:58.307: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6r5zj PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 17 12:42:58.307: INFO: >>> kubeConfig: /root/.kube/config I0217 12:42:58.312816 8 log.go:172] (0xc001e5a370) Go away received I0217 12:42:58.448708 8 log.go:172] (0xc0019f60b0) (0xc0023a01e0) Create stream I0217 12:42:58.448785 8 log.go:172] (0xc0019f60b0) (0xc0023a01e0) Stream added, broadcasting: 1 I0217 12:42:58.504621 8 log.go:172] (0xc0019f60b0) Reply frame received for 1 I0217 12:42:58.504686 8 log.go:172] (0xc0019f60b0) (0xc0023a0280) Create stream I0217 12:42:58.504700 8 log.go:172] (0xc0019f60b0) (0xc0023a0280) Stream added, broadcasting: 3 I0217 12:42:58.508258 8 log.go:172] (0xc0019f60b0) Reply frame received for 3 I0217 12:42:58.508307 8 log.go:172] (0xc0019f60b0) (0xc0023a03c0) Create stream I0217 12:42:58.508324 8 log.go:172] (0xc0019f60b0) (0xc0023a03c0) Stream added, broadcasting: 5 I0217 12:42:58.511969 8 log.go:172] (0xc0019f60b0) Reply frame received for 5 I0217 12:42:58.817161 8 log.go:172] (0xc0019f60b0) Data frame received for 3 I0217 12:42:58.817239 8 log.go:172] (0xc0023a0280) (3) Data frame handling I0217 12:42:58.817261 8 log.go:172] (0xc0023a0280) (3) Data frame sent I0217 12:42:59.007686 8 log.go:172] (0xc0019f60b0) (0xc0023a0280) Stream removed, broadcasting: 3 I0217 12:42:59.007916 8 log.go:172] (0xc0019f60b0) Data frame received for 1 I0217 12:42:59.007967 8 log.go:172] (0xc0023a01e0) (1) Data frame handling I0217 12:42:59.007994 8 log.go:172] (0xc0023a01e0) (1) 
Data frame sent I0217 12:42:59.008036 8 log.go:172] (0xc0019f60b0) (0xc0023a03c0) Stream removed, broadcasting: 5 I0217 12:42:59.008082 8 log.go:172] (0xc0019f60b0) (0xc0023a01e0) Stream removed, broadcasting: 1 I0217 12:42:59.008098 8 log.go:172] (0xc0019f60b0) Go away received I0217 12:42:59.008379 8 log.go:172] (0xc0019f60b0) (0xc0023a01e0) Stream removed, broadcasting: 1 I0217 12:42:59.008390 8 log.go:172] (0xc0019f60b0) (0xc0023a0280) Stream removed, broadcasting: 3 I0217 12:42:59.008398 8 log.go:172] (0xc0019f60b0) (0xc0023a03c0) Stream removed, broadcasting: 5 Feb 17 12:42:59.008: INFO: Exec stderr: "" Feb 17 12:42:59.008: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6r5zj PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 17 12:42:59.008: INFO: >>> kubeConfig: /root/.kube/config I0217 12:42:59.093259 8 log.go:172] (0xc0019f6630) (0xc0023a0640) Create stream I0217 12:42:59.093347 8 log.go:172] (0xc0019f6630) (0xc0023a0640) Stream added, broadcasting: 1 I0217 12:42:59.100538 8 log.go:172] (0xc0019f6630) Reply frame received for 1 I0217 12:42:59.100572 8 log.go:172] (0xc0019f6630) (0xc001eda0a0) Create stream I0217 12:42:59.100582 8 log.go:172] (0xc0019f6630) (0xc001eda0a0) Stream added, broadcasting: 3 I0217 12:42:59.102006 8 log.go:172] (0xc0019f6630) Reply frame received for 3 I0217 12:42:59.102032 8 log.go:172] (0xc0019f6630) (0xc0023a0820) Create stream I0217 12:42:59.102040 8 log.go:172] (0xc0019f6630) (0xc0023a0820) Stream added, broadcasting: 5 I0217 12:42:59.103832 8 log.go:172] (0xc0019f6630) Reply frame received for 5 I0217 12:42:59.254207 8 log.go:172] (0xc0019f6630) Data frame received for 3 I0217 12:42:59.254325 8 log.go:172] (0xc001eda0a0) (3) Data frame handling I0217 12:42:59.254364 8 log.go:172] (0xc001eda0a0) (3) Data frame sent I0217 12:42:59.382738 8 log.go:172] (0xc0019f6630) Data frame received for 1 I0217 12:42:59.382836 8 log.go:172] (0xc0019f6630) (0xc001eda0a0) Stream removed, broadcasting: 3 I0217 12:42:59.382922 8 log.go:172] (0xc0023a0640) (1) Data frame handling I0217 12:42:59.382953 8 log.go:172] (0xc0023a0640) (1) Data frame sent I0217 12:42:59.383162 8 log.go:172] (0xc0019f6630) (0xc0023a0820) Stream removed, broadcasting: 5 I0217 12:42:59.383243 8 log.go:172] (0xc0019f6630) (0xc0023a0640) Stream removed, broadcasting: 1 I0217 12:42:59.383268 8 log.go:172] (0xc0019f6630) Go away received I0217 12:42:59.383866 8 log.go:172] (0xc0019f6630) (0xc0023a0640) Stream removed, broadcasting: 1 I0217 12:42:59.383899 8 log.go:172] (0xc0019f6630) (0xc001eda0a0) Stream removed, broadcasting: 3 I0217 12:42:59.383916 8 log.go:172] (0xc0019f6630) (0xc0023a0820) Stream removed, broadcasting: 5 Feb 17 12:42:59.383: INFO: Exec stderr: "" Feb 17 12:42:59.384: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6r5zj PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 17 12:42:59.384: INFO: >>> kubeConfig: /root/.kube/config I0217 12:42:59.456337 8 log.go:172] (0xc000aff600) (0xc001eda320) Create stream I0217 12:42:59.456447 8 log.go:172] (0xc000aff600) (0xc001eda320) Stream added, broadcasting: 1 I0217 12:42:59.461293 8 log.go:172] (0xc000aff600) Reply frame received for 1 I0217 12:42:59.461371 8 log.go:172] (0xc000aff600) (0xc002300000) Create stream I0217 12:42:59.461404 8 log.go:172] (0xc000aff600) (0xc002300000) Stream added, broadcasting: 3 I0217 
12:42:59.462775 8 log.go:172] (0xc000aff600) Reply frame received for 3 I0217 12:42:59.462824 8 log.go:172] (0xc000aff600) (0xc002358140) Create stream I0217 12:42:59.462837 8 log.go:172] (0xc000aff600) (0xc002358140) Stream added, broadcasting: 5 I0217 12:42:59.463827 8 log.go:172] (0xc000aff600) Reply frame received for 5 I0217 12:42:59.605438 8 log.go:172] (0xc000aff600) Data frame received for 3 I0217 12:42:59.605508 8 log.go:172] (0xc002300000) (3) Data frame handling I0217 12:42:59.605524 8 log.go:172] (0xc002300000) (3) Data frame sent I0217 12:42:59.733785 8 log.go:172] (0xc000aff600) Data frame received for 1 I0217 12:42:59.733892 8 log.go:172] (0xc000aff600) (0xc002300000) Stream removed, broadcasting: 3 I0217 12:42:59.733938 8 log.go:172] (0xc001eda320) (1) Data frame handling I0217 12:42:59.733965 8 log.go:172] (0xc001eda320) (1) Data frame sent I0217 12:42:59.733978 8 log.go:172] (0xc000aff600) (0xc001eda320) Stream removed, broadcasting: 1 I0217 12:42:59.734155 8 log.go:172] (0xc000aff600) (0xc002358140) Stream removed, broadcasting: 5 I0217 12:42:59.734218 8 log.go:172] (0xc000aff600) Go away received I0217 12:42:59.734296 8 log.go:172] (0xc000aff600) (0xc001eda320) Stream removed, broadcasting: 1 I0217 12:42:59.734327 8 log.go:172] (0xc000aff600) (0xc002300000) Stream removed, broadcasting: 3 I0217 12:42:59.734347 8 log.go:172] (0xc000aff600) (0xc002358140) Stream removed, broadcasting: 5 Feb 17 12:42:59.734: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 17 12:42:59.734: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6r5zj PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 17 12:42:59.734: INFO: >>> kubeConfig: /root/.kube/config I0217 12:42:59.841376 8 log.go:172] (0xc001bb82c0) (0xc002300280) Create stream I0217 12:42:59.841681 8 log.go:172] (0xc001bb82c0) (0xc002300280) Stream added, broadcasting: 1 I0217 12:42:59.845345 8 log.go:172] (0xc001bb82c0) Reply frame received for 1 I0217 12:42:59.845383 8 log.go:172] (0xc001bb82c0) (0xc0027c2320) Create stream I0217 12:42:59.845393 8 log.go:172] (0xc001bb82c0) (0xc0027c2320) Stream added, broadcasting: 3 I0217 12:42:59.846414 8 log.go:172] (0xc001bb82c0) Reply frame received for 3 I0217 12:42:59.846458 8 log.go:172] (0xc001bb82c0) (0xc0021108c0) Create stream I0217 12:42:59.846479 8 log.go:172] (0xc001bb82c0) (0xc0021108c0) Stream added, broadcasting: 5 I0217 12:42:59.847510 8 log.go:172] (0xc001bb82c0) Reply frame received for 5 I0217 12:42:59.977802 8 log.go:172] (0xc001bb82c0) Data frame received for 3 I0217 12:42:59.977959 8 log.go:172] (0xc0027c2320) (3) Data frame handling I0217 12:42:59.977992 8 log.go:172] (0xc0027c2320) (3) Data frame sent I0217 12:43:00.158221 8 log.go:172] (0xc001bb82c0) Data frame received for 1 I0217 12:43:00.158279 8 log.go:172] (0xc002300280) (1) Data frame handling I0217 12:43:00.158296 8 log.go:172] (0xc002300280) (1) Data frame sent I0217 12:43:00.158306 8 log.go:172] (0xc001bb82c0) (0xc002300280) Stream removed, broadcasting: 1 I0217 12:43:00.158344 8 log.go:172] (0xc001bb82c0) (0xc0027c2320) Stream removed, broadcasting: 3 I0217 12:43:00.158800 8 log.go:172] (0xc001bb82c0) (0xc0021108c0) Stream removed, broadcasting: 5 I0217 12:43:00.158846 8 log.go:172] (0xc001bb82c0) (0xc002300280) Stream removed, broadcasting: 1 I0217 12:43:00.158866 8 log.go:172] (0xc001bb82c0) (0xc0027c2320) Stream removed, 
broadcasting: 3 I0217 12:43:00.158906 8 log.go:172] (0xc001bb82c0) (0xc0021108c0) Stream removed, broadcasting: 5 I0217 12:43:00.159249 8 log.go:172] (0xc001bb82c0) Go away received Feb 17 12:43:00.159: INFO: Exec stderr: "" Feb 17 12:43:00.159: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6r5zj PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 17 12:43:00.159: INFO: >>> kubeConfig: /root/.kube/config I0217 12:43:00.250281 8 log.go:172] (0xc001a1c2c0) (0xc0027c25a0) Create stream I0217 12:43:00.250432 8 log.go:172] (0xc001a1c2c0) (0xc0027c25a0) Stream added, broadcasting: 1 I0217 12:43:00.256227 8 log.go:172] (0xc001a1c2c0) Reply frame received for 1 I0217 12:43:00.256312 8 log.go:172] (0xc001a1c2c0) (0xc002300320) Create stream I0217 12:43:00.256333 8 log.go:172] (0xc001a1c2c0) (0xc002300320) Stream added, broadcasting: 3 I0217 12:43:00.258441 8 log.go:172] (0xc001a1c2c0) Reply frame received for 3 I0217 12:43:00.258611 8 log.go:172] (0xc001a1c2c0) (0xc0023003c0) Create stream I0217 12:43:00.258680 8 log.go:172] (0xc001a1c2c0) (0xc0023003c0) Stream added, broadcasting: 5 I0217 12:43:00.260808 8 log.go:172] (0xc001a1c2c0) Reply frame received for 5 I0217 12:43:00.390063 8 log.go:172] (0xc001a1c2c0) Data frame received for 3 I0217 12:43:00.390190 8 log.go:172] (0xc002300320) (3) Data frame handling I0217 12:43:00.390207 8 log.go:172] (0xc002300320) (3) Data frame sent I0217 12:43:00.587825 8 log.go:172] (0xc001a1c2c0) Data frame received for 1 I0217 12:43:00.587972 8 log.go:172] (0xc0027c25a0) (1) Data frame handling I0217 12:43:00.588009 8 log.go:172] (0xc0027c25a0) (1) Data frame sent I0217 12:43:00.588026 8 log.go:172] (0xc001a1c2c0) (0xc0027c25a0) Stream removed, broadcasting: 1 I0217 12:43:00.589033 8 log.go:172] (0xc001a1c2c0) (0xc002300320) Stream removed, broadcasting: 3 I0217 12:43:00.589126 8 log.go:172] (0xc001a1c2c0) (0xc0023003c0) Stream removed, broadcasting: 5 I0217 12:43:00.589199 8 log.go:172] (0xc001a1c2c0) Go away received I0217 12:43:00.589250 8 log.go:172] (0xc001a1c2c0) (0xc0027c25a0) Stream removed, broadcasting: 1 I0217 12:43:00.589294 8 log.go:172] (0xc001a1c2c0) (0xc002300320) Stream removed, broadcasting: 3 I0217 12:43:00.589309 8 log.go:172] (0xc001a1c2c0) (0xc0023003c0) Stream removed, broadcasting: 5 Feb 17 12:43:00.589: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 17 12:43:00.589: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6r5zj PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 17 12:43:00.589: INFO: >>> kubeConfig: /root/.kube/config I0217 12:43:00.717491 8 log.go:172] (0xc001e5a790) (0xc002110960) Create stream I0217 12:43:00.717616 8 log.go:172] (0xc001e5a790) (0xc002110960) Stream added, broadcasting: 1 I0217 12:43:00.721973 8 log.go:172] (0xc001e5a790) Reply frame received for 1 I0217 12:43:00.721994 8 log.go:172] (0xc001e5a790) (0xc002110a00) Create stream I0217 12:43:00.722000 8 log.go:172] (0xc001e5a790) (0xc002110a00) Stream added, broadcasting: 3 I0217 12:43:00.725180 8 log.go:172] (0xc001e5a790) Reply frame received for 3 I0217 12:43:00.725219 8 log.go:172] (0xc001e5a790) (0xc002300460) Create stream I0217 12:43:00.725232 8 log.go:172] (0xc001e5a790) (0xc002300460) Stream added, broadcasting: 5 I0217 12:43:00.726210 8 
log.go:172] (0xc001e5a790) Reply frame received for 5 I0217 12:43:00.830139 8 log.go:172] (0xc001e5a790) Data frame received for 3 I0217 12:43:00.830300 8 log.go:172] (0xc002110a00) (3) Data frame handling I0217 12:43:00.830339 8 log.go:172] (0xc002110a00) (3) Data frame sent I0217 12:43:00.960435 8 log.go:172] (0xc001e5a790) (0xc002110a00) Stream removed, broadcasting: 3 I0217 12:43:00.960726 8 log.go:172] (0xc001e5a790) Data frame received for 1 I0217 12:43:00.960770 8 log.go:172] (0xc002110960) (1) Data frame handling I0217 12:43:00.960782 8 log.go:172] (0xc002110960) (1) Data frame sent I0217 12:43:00.960791 8 log.go:172] (0xc001e5a790) (0xc002110960) Stream removed, broadcasting: 1 I0217 12:43:00.960870 8 log.go:172] (0xc001e5a790) (0xc002300460) Stream removed, broadcasting: 5 I0217 12:43:00.960988 8 log.go:172] (0xc001e5a790) Go away received I0217 12:43:00.961078 8 log.go:172] (0xc001e5a790) (0xc002110960) Stream removed, broadcasting: 1 I0217 12:43:00.961087 8 log.go:172] (0xc001e5a790) (0xc002110a00) Stream removed, broadcasting: 3 I0217 12:43:00.961096 8 log.go:172] (0xc001e5a790) (0xc002300460) Stream removed, broadcasting: 5 Feb 17 12:43:00.961: INFO: Exec stderr: "" Feb 17 12:43:00.961: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6r5zj PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 17 12:43:00.961: INFO: >>> kubeConfig: /root/.kube/config I0217 12:43:01.032324 8 log.go:172] (0xc001bb8790) (0xc0023006e0) Create stream I0217 12:43:01.032458 8 log.go:172] (0xc001bb8790) (0xc0023006e0) Stream added, broadcasting: 1 I0217 12:43:01.035811 8 log.go:172] (0xc001bb8790) Reply frame received for 1 I0217 12:43:01.035835 8 log.go:172] (0xc001bb8790) (0xc001eda3c0) Create stream I0217 12:43:01.035841 8 log.go:172] (0xc001bb8790) (0xc001eda3c0) Stream added, broadcasting: 3 I0217 12:43:01.036717 8 log.go:172] (0xc001bb8790) Reply frame received for 3 I0217 12:43:01.036752 8 log.go:172] (0xc001bb8790) (0xc0023581e0) Create stream I0217 12:43:01.036763 8 log.go:172] (0xc001bb8790) (0xc0023581e0) Stream added, broadcasting: 5 I0217 12:43:01.037889 8 log.go:172] (0xc001bb8790) Reply frame received for 5 I0217 12:43:01.225977 8 log.go:172] (0xc001bb8790) Data frame received for 3 I0217 12:43:01.226082 8 log.go:172] (0xc001eda3c0) (3) Data frame handling I0217 12:43:01.226110 8 log.go:172] (0xc001eda3c0) (3) Data frame sent I0217 12:43:01.323684 8 log.go:172] (0xc001bb8790) (0xc001eda3c0) Stream removed, broadcasting: 3 I0217 12:43:01.323839 8 log.go:172] (0xc001bb8790) Data frame received for 1 I0217 12:43:01.323853 8 log.go:172] (0xc0023006e0) (1) Data frame handling I0217 12:43:01.323863 8 log.go:172] (0xc0023006e0) (1) Data frame sent I0217 12:43:01.323871 8 log.go:172] (0xc001bb8790) (0xc0023006e0) Stream removed, broadcasting: 1 I0217 12:43:01.324000 8 log.go:172] (0xc001bb8790) (0xc0023581e0) Stream removed, broadcasting: 5 I0217 12:43:01.324039 8 log.go:172] (0xc001bb8790) (0xc0023006e0) Stream removed, broadcasting: 1 I0217 12:43:01.324050 8 log.go:172] (0xc001bb8790) (0xc001eda3c0) Stream removed, broadcasting: 3 I0217 12:43:01.324059 8 log.go:172] (0xc001bb8790) (0xc0023581e0) Stream removed, broadcasting: 5 Feb 17 12:43:01.324: INFO: Exec stderr: "" Feb 17 12:43:01.324: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6r5zj PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Feb 17 12:43:01.324: INFO: >>> kubeConfig: /root/.kube/config I0217 12:43:01.325033 8 log.go:172] (0xc001bb8790) Go away received I0217 12:43:01.384525 8 log.go:172] (0xc0019f6b00) (0xc0023a0a00) Create stream I0217 12:43:01.384630 8 log.go:172] (0xc0019f6b00) (0xc0023a0a00) Stream added, broadcasting: 1 I0217 12:43:01.389511 8 log.go:172] (0xc0019f6b00) Reply frame received for 1 I0217 12:43:01.389565 8 log.go:172] (0xc0019f6b00) (0xc001eda460) Create stream I0217 12:43:01.389577 8 log.go:172] (0xc0019f6b00) (0xc001eda460) Stream added, broadcasting: 3 I0217 12:43:01.390588 8 log.go:172] (0xc0019f6b00) Reply frame received for 3 I0217 12:43:01.390618 8 log.go:172] (0xc0019f6b00) (0xc001eda500) Create stream I0217 12:43:01.390629 8 log.go:172] (0xc0019f6b00) (0xc001eda500) Stream added, broadcasting: 5 I0217 12:43:01.392206 8 log.go:172] (0xc0019f6b00) Reply frame received for 5 I0217 12:43:01.514893 8 log.go:172] (0xc0019f6b00) Data frame received for 3 I0217 12:43:01.514973 8 log.go:172] (0xc001eda460) (3) Data frame handling I0217 12:43:01.515023 8 log.go:172] (0xc001eda460) (3) Data frame sent I0217 12:43:01.623429 8 log.go:172] (0xc0019f6b00) Data frame received for 1 I0217 12:43:01.623495 8 log.go:172] (0xc0023a0a00) (1) Data frame handling I0217 12:43:01.623523 8 log.go:172] (0xc0023a0a00) (1) Data frame sent I0217 12:43:01.623851 8 log.go:172] (0xc0019f6b00) (0xc0023a0a00) Stream removed, broadcasting: 1 I0217 12:43:01.624203 8 log.go:172] (0xc0019f6b00) (0xc001eda460) Stream removed, broadcasting: 3 I0217 12:43:01.624483 8 log.go:172] (0xc0019f6b00) (0xc001eda500) Stream removed, broadcasting: 5 I0217 12:43:01.624693 8 log.go:172] (0xc0019f6b00) Go away received I0217 12:43:01.624752 8 log.go:172] (0xc0019f6b00) (0xc0023a0a00) Stream removed, broadcasting: 1 I0217 12:43:01.624767 8 log.go:172] (0xc0019f6b00) (0xc001eda460) Stream removed, broadcasting: 3 I0217 12:43:01.624776 8 log.go:172] (0xc0019f6b00) (0xc001eda500) Stream removed, broadcasting: 5 Feb 17 12:43:01.624: INFO: Exec stderr: "" Feb 17 12:43:01.624: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-6r5zj PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 17 12:43:01.625: INFO: >>> kubeConfig: /root/.kube/config I0217 12:43:01.698488 8 log.go:172] (0xc0019f6fd0) (0xc0023a0c80) Create stream I0217 12:43:01.698783 8 log.go:172] (0xc0019f6fd0) (0xc0023a0c80) Stream added, broadcasting: 1 I0217 12:43:01.706645 8 log.go:172] (0xc0019f6fd0) Reply frame received for 1 I0217 12:43:01.706701 8 log.go:172] (0xc0019f6fd0) (0xc0023a0d20) Create stream I0217 12:43:01.706717 8 log.go:172] (0xc0019f6fd0) (0xc0023a0d20) Stream added, broadcasting: 3 I0217 12:43:01.707785 8 log.go:172] (0xc0019f6fd0) Reply frame received for 3 I0217 12:43:01.707820 8 log.go:172] (0xc0019f6fd0) (0xc001eda640) Create stream I0217 12:43:01.707836 8 log.go:172] (0xc0019f6fd0) (0xc001eda640) Stream added, broadcasting: 5 I0217 12:43:01.709002 8 log.go:172] (0xc0019f6fd0) Reply frame received for 5 I0217 12:43:01.853569 8 log.go:172] (0xc0019f6fd0) Data frame received for 3 I0217 12:43:01.853707 8 log.go:172] (0xc0023a0d20) (3) Data frame handling I0217 12:43:01.853730 8 log.go:172] (0xc0023a0d20) (3) Data frame sent I0217 12:43:01.969365 8 log.go:172] (0xc0019f6fd0) (0xc0023a0d20) Stream removed, broadcasting: 3 I0217 12:43:01.969535 8 log.go:172] (0xc0019f6fd0) Data frame received for 1 
I0217 12:43:01.969559 8 log.go:172] (0xc0023a0c80) (1) Data frame handling I0217 12:43:01.969585 8 log.go:172] (0xc0023a0c80) (1) Data frame sent I0217 12:43:01.969607 8 log.go:172] (0xc0019f6fd0) (0xc0023a0c80) Stream removed, broadcasting: 1 I0217 12:43:01.969660 8 log.go:172] (0xc0019f6fd0) (0xc001eda640) Stream removed, broadcasting: 5 I0217 12:43:01.969747 8 log.go:172] (0xc0019f6fd0) Go away received I0217 12:43:01.969883 8 log.go:172] (0xc0019f6fd0) (0xc0023a0c80) Stream removed, broadcasting: 1 I0217 12:43:01.969898 8 log.go:172] (0xc0019f6fd0) (0xc0023a0d20) Stream removed, broadcasting: 3 I0217 12:43:01.969906 8 log.go:172] (0xc0019f6fd0) (0xc001eda640) Stream removed, broadcasting: 5 Feb 17 12:43:01.969: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:43:01.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-6r5zj" for this suite. Feb 17 12:43:46.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:43:46.393: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-6r5zj, resource: bindings, ignored listing per whitelist Feb 17 12:43:46.441: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-6r5zj deletion completed in 44.45778514s • [SLOW TEST:81.405 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:43:46.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Feb 17 12:43:46.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Feb 17 12:43:49.644: INFO: stderr: "" Feb 17 12:43:49.645: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:43:49.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zcftj" for this suite. 
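The cluster-info spec above only has to see the master and KubeDNS entries in the command's output, which is why the logged stdout is full of \x1b colour-escape sequences. As a rough, hand-run stand-in for that assertion (same kubeconfig path as the run; the grep is an illustration, not the test's actual matcher):

# print the endpoints exactly as the test does, then check the master entry is present
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info
/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info | grep -q "Kubernetes master" && echo OK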
Feb 17 12:43:55.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:43:55.833: INFO: namespace: e2e-tests-kubectl-zcftj, resource: bindings, ignored listing per whitelist Feb 17 12:43:56.008: INFO: namespace e2e-tests-kubectl-zcftj deletion completed in 6.340279521s • [SLOW TEST:9.566 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:43:56.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:44:06.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-klmbj" for this suite. 
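The Kubelet "should print the output to logs" spec emits no intermediate STEPs, so only its setup and teardown are visible above; the check itself is that a busybox command's stdout can be read back through the kubelet as container logs. A loose, hand-run approximation (pod name and message are illustrative, --kubeconfig flag omitted for brevity):

# run a one-shot busybox pod that writes to stdout, then read it back with kubectl logs
kubectl run busybox-logs-demo --image=busybox --restart=Never --command -- sh -c 'echo hello from busybox'
sleep 10    # give the kubelet a moment to pull the image and run the container
kubectl logs busybox-logs-demo        # expected output: hello from busybox
kubectl delete pod busybox-logs-demo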
Feb 17 12:44:54.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:44:54.578: INFO: namespace: e2e-tests-kubelet-test-klmbj, resource: bindings, ignored listing per whitelist Feb 17 12:44:54.676: INFO: namespace e2e-tests-kubelet-test-klmbj deletion completed in 48.325695421s • [SLOW TEST:58.668 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:44:54.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Feb 17 12:44:55.063: INFO: namespace e2e-tests-kubectl-2lrwc Feb 17 12:44:55.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2lrwc' Feb 17 12:44:55.638: INFO: stderr: "" Feb 17 12:44:55.638: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Feb 17 12:44:56.950: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:44:56.951: INFO: Found 0 / 1 Feb 17 12:44:57.657: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:44:57.657: INFO: Found 0 / 1 Feb 17 12:44:58.662: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:44:58.662: INFO: Found 0 / 1 Feb 17 12:44:59.680: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:44:59.680: INFO: Found 0 / 1 Feb 17 12:45:00.672: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:45:00.672: INFO: Found 0 / 1 Feb 17 12:45:01.843: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:45:01.843: INFO: Found 0 / 1 Feb 17 12:45:02.762: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:45:02.762: INFO: Found 0 / 1 Feb 17 12:45:03.655: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:45:03.655: INFO: Found 0 / 1 Feb 17 12:45:04.678: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:45:04.678: INFO: Found 1 / 1 Feb 17 12:45:04.678: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 17 12:45:04.681: INFO: Selector matched 1 pods for map[app:redis] Feb 17 12:45:04.681: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Feb 17 12:45:04.681: INFO: wait on redis-master startup in e2e-tests-kubectl-2lrwc Feb 17 12:45:04.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-v48cd redis-master --namespace=e2e-tests-kubectl-2lrwc' Feb 17 12:45:04.895: INFO: stderr: "" Feb 17 12:45:04.895: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 17 Feb 12:45:03.719 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Feb 12:45:03.720 # Server started, Redis version 3.2.12\n1:M 17 Feb 12:45:03.720 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Feb 12:45:03.720 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Feb 17 12:45:04.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-2lrwc' Feb 17 12:45:05.183: INFO: stderr: "" Feb 17 12:45:05.183: INFO: stdout: "service/rm2 exposed\n" Feb 17 12:45:05.195: INFO: Service rm2 in namespace e2e-tests-kubectl-2lrwc found. STEP: exposing service Feb 17 12:45:07.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-2lrwc' Feb 17 12:45:07.514: INFO: stderr: "" Feb 17 12:45:07.515: INFO: stdout: "service/rm3 exposed\n" Feb 17 12:45:07.548: INFO: Service rm3 in namespace e2e-tests-kubectl-2lrwc found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:45:09.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2lrwc" for this suite. 
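The expose spec drives three kubectl invocations that appear verbatim above: create the redis-master replication controller from a manifest piped to stdin, expose the RC as service rm2 (port 1234 -> 6379), then expose that service again as rm3 (port 2345 -> 6379). Condensed into a sketch you could run yourself (namespace and manifest file names are illustrative; the expose flags are copied from the log):

NS=kubectl-expose-demo                                 # stand-in for the generated e2e-tests-kubectl-* namespace
kubectl create namespace "$NS"
kubectl create -f redis-master-rc.yaml -n "$NS"        # any RC manifest labelled app=redis
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 -n "$NS"
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 -n "$NS"
kubectl get services rm2 rm3 -n "$NS"                  # both should be assigned a ClusterIP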
Feb 17 12:45:31.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:45:31.708: INFO: namespace: e2e-tests-kubectl-2lrwc, resource: bindings, ignored listing per whitelist Feb 17 12:45:31.802: INFO: namespace e2e-tests-kubectl-2lrwc deletion completed in 22.226952263s • [SLOW TEST:37.125 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:45:31.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-9d4fr.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-9d4fr.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-9d4fr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-9d4fr.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-9d4fr.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-9d4fr.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 17 12:45:52.228: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-9d4fr/dns-test-6251708a-5183-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-6251708a-5183-11ea-a180-0242ac110008) Feb 17 12:45:52.240: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-9d4fr/dns-test-6251708a-5183-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-6251708a-5183-11ea-a180-0242ac110008) Feb 17 12:45:52.250: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-9d4fr/dns-test-6251708a-5183-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-6251708a-5183-11ea-a180-0242ac110008) Feb 17 12:45:52.261: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-9d4fr/dns-test-6251708a-5183-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-6251708a-5183-11ea-a180-0242ac110008) Feb 17 12:45:52.267: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-9d4fr/dns-test-6251708a-5183-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-6251708a-5183-11ea-a180-0242ac110008) Feb 17 12:45:52.275: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod 
e2e-tests-dns-9d4fr/dns-test-6251708a-5183-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-6251708a-5183-11ea-a180-0242ac110008) Feb 17 12:45:52.283: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-9d4fr.svc.cluster.local from pod e2e-tests-dns-9d4fr/dns-test-6251708a-5183-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-6251708a-5183-11ea-a180-0242ac110008) Feb 17 12:45:52.290: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-9d4fr/dns-test-6251708a-5183-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-6251708a-5183-11ea-a180-0242ac110008) Feb 17 12:45:52.295: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-9d4fr/dns-test-6251708a-5183-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-6251708a-5183-11ea-a180-0242ac110008) Feb 17 12:45:52.303: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-9d4fr/dns-test-6251708a-5183-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-6251708a-5183-11ea-a180-0242ac110008) Feb 17 12:45:52.303: INFO: Lookups using e2e-tests-dns-9d4fr/dns-test-6251708a-5183-11ea-a180-0242ac110008 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-9d4fr.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Feb 17 12:45:57.489: INFO: DNS probes using e2e-tests-dns-9d4fr/dns-test-6251708a-5183-11ea-a180-0242ac110008 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:45:57.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-9d4fr" for this suite. 
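Every entry in the wheezy/jessie probe scripts above follows the same pattern: resolve one name with dig over UDP, again with +tcp, and drop an OK marker file into /results only when the answer section is non-empty; the "Unable to read ..." lines are the framework polling for those marker files before they exist, which is why the probes later succeed. Pulled out of the loop, a single check looks roughly like this (run inside a cluster pod so the cluster search path applies; the doubled $$ in the logged command is a templating escape, a plain shell uses a single $):

# kubernetes.default over UDP, then over TCP; /results is the probe pod's results dir
check="$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$check" && echo OK > /results/wheezy_udp@kubernetes.default
check="$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$check" && echo OK > /results/wheezy_tcp@kubernetes.default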
Feb 17 12:46:07.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:46:08.081: INFO: namespace: e2e-tests-dns-9d4fr, resource: bindings, ignored listing per whitelist Feb 17 12:46:08.148: INFO: namespace e2e-tests-dns-9d4fr deletion completed in 10.385385575s • [SLOW TEST:36.345 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:46:08.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:46:21.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-tjnt7" for this suite. 
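The "busybox command that always fails" spec asserts that a container which keeps exiting non-zero reports a terminated state carrying a reason, even though nothing beyond setup and teardown is logged above. A hand-run approximation under the assumption of restartPolicy Never (pod name is illustrative; with a non-zero exit the expected reason is Error):

kubectl run always-fails --image=busybox --restart=Never --command -- /bin/false
sleep 10    # let the container start and exit
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'   # expected: Error
kubectl delete pod always-fails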
Feb 17 12:46:27.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:46:27.772: INFO: namespace: e2e-tests-kubelet-test-tjnt7, resource: bindings, ignored listing per whitelist Feb 17 12:46:27.847: INFO: namespace e2e-tests-kubelet-test-tjnt7 deletion completed in 6.335104229s • [SLOW TEST:19.699 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:46:27.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Feb 17 12:46:28.105: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Feb 17 12:46:28.263: INFO: Number of nodes with available pods: 0 Feb 17 12:46:28.264: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:29.297: INFO: Number of nodes with available pods: 0 Feb 17 12:46:29.297: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:30.907: INFO: Number of nodes with available pods: 0 Feb 17 12:46:30.907: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:31.294: INFO: Number of nodes with available pods: 0 Feb 17 12:46:31.294: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:32.291: INFO: Number of nodes with available pods: 0 Feb 17 12:46:32.291: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:33.284: INFO: Number of nodes with available pods: 0 Feb 17 12:46:33.284: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:34.664: INFO: Number of nodes with available pods: 0 Feb 17 12:46:34.665: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:35.427: INFO: Number of nodes with available pods: 0 Feb 17 12:46:35.427: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:36.626: INFO: Number of nodes with available pods: 0 Feb 17 12:46:36.626: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:37.296: INFO: Number of nodes with available pods: 0 Feb 17 12:46:37.296: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:38.284: INFO: Number of nodes with available pods: 1 Feb 17 12:46:38.284: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Feb 17 12:46:38.397: INFO: Wrong image for pod: daemon-set-gqbd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:39.426: INFO: Wrong image for pod: daemon-set-gqbd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:40.448: INFO: Wrong image for pod: daemon-set-gqbd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:41.440: INFO: Wrong image for pod: daemon-set-gqbd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:42.444: INFO: Wrong image for pod: daemon-set-gqbd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:43.449: INFO: Wrong image for pod: daemon-set-gqbd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:44.424: INFO: Wrong image for pod: daemon-set-gqbd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:45.423: INFO: Wrong image for pod: daemon-set-gqbd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:45.423: INFO: Pod daemon-set-gqbd8 is not available Feb 17 12:46:46.427: INFO: Wrong image for pod: daemon-set-gqbd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:46.427: INFO: Pod daemon-set-gqbd8 is not available Feb 17 12:46:47.424: INFO: Wrong image for pod: daemon-set-gqbd8. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:47.424: INFO: Pod daemon-set-gqbd8 is not available Feb 17 12:46:48.439: INFO: Wrong image for pod: daemon-set-gqbd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:48.439: INFO: Pod daemon-set-gqbd8 is not available Feb 17 12:46:49.425: INFO: Wrong image for pod: daemon-set-gqbd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:49.425: INFO: Pod daemon-set-gqbd8 is not available Feb 17 12:46:50.429: INFO: Wrong image for pod: daemon-set-gqbd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:50.429: INFO: Pod daemon-set-gqbd8 is not available Feb 17 12:46:51.477: INFO: Wrong image for pod: daemon-set-gqbd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:51.477: INFO: Pod daemon-set-gqbd8 is not available Feb 17 12:46:52.432: INFO: Wrong image for pod: daemon-set-gqbd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 17 12:46:52.432: INFO: Pod daemon-set-gqbd8 is not available Feb 17 12:46:54.053: INFO: Pod daemon-set-wtr8w is not available STEP: Check that daemon pods are still running on every node of the cluster. Feb 17 12:46:54.669: INFO: Number of nodes with available pods: 0 Feb 17 12:46:54.669: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:55.928: INFO: Number of nodes with available pods: 0 Feb 17 12:46:55.929: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:56.699: INFO: Number of nodes with available pods: 0 Feb 17 12:46:56.699: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:57.684: INFO: Number of nodes with available pods: 0 Feb 17 12:46:57.684: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:58.683: INFO: Number of nodes with available pods: 0 Feb 17 12:46:58.683: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:46:59.700: INFO: Number of nodes with available pods: 0 Feb 17 12:46:59.700: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:47:00.848: INFO: Number of nodes with available pods: 0 Feb 17 12:47:00.848: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:47:01.692: INFO: Number of nodes with available pods: 0 Feb 17 12:47:01.692: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod Feb 17 12:47:02.798: INFO: Number of nodes with available pods: 1 Feb 17 12:47:02.798: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tq9jm, will wait for the garbage collector to delete the pods Feb 17 12:47:02.886: INFO: Deleting DaemonSet.extensions daemon-set took: 15.444283ms Feb 17 12:47:02.987: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.050976ms Feb 17 12:47:12.693: INFO: Number of nodes with available pods: 0 Feb 17 12:47:12.693: INFO: Number of running nodes: 0, number of available pods: 0 Feb 17 12:47:12.717: INFO: 
daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tq9jm/daemonsets","resourceVersion":"21981142"},"items":null} Feb 17 12:47:12.738: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tq9jm/pods","resourceVersion":"21981142"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:47:12.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-tq9jm" for this suite. Feb 17 12:47:18.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:47:19.046: INFO: namespace: e2e-tests-daemonsets-tq9jm, resource: bindings, ignored listing per whitelist Feb 17 12:47:19.071: INFO: namespace e2e-tests-daemonsets-tq9jm deletion completed in 6.271083883s • [SLOW TEST:51.223 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:47:19.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:47:32.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-t82qw" for this suite. 
Feb 17 12:47:58.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 12:47:58.930: INFO: namespace: e2e-tests-replication-controller-t82qw, resource: bindings, ignored listing per whitelist Feb 17 12:47:58.970: INFO: namespace e2e-tests-replication-controller-t82qw deletion completed in 26.305681017s • [SLOW TEST:39.899 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Feb 17 12:47:58.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 17 12:47:59.257: INFO: Waiting up to 5m0s for pod "pod-ba0c7742-5183-11ea-a180-0242ac110008" in namespace "e2e-tests-emptydir-sql2p" to be "success or failure" Feb 17 12:47:59.265: INFO: Pod "pod-ba0c7742-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.859152ms Feb 17 12:48:01.766: INFO: Pod "pod-ba0c7742-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.508021598s Feb 17 12:48:03.791: INFO: Pod "pod-ba0c7742-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.533690617s Feb 17 12:48:06.454: INFO: Pod "pod-ba0c7742-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.196593623s Feb 17 12:48:08.512: INFO: Pod "pod-ba0c7742-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.254513808s Feb 17 12:48:10.604: INFO: Pod "pod-ba0c7742-5183-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.346696431s STEP: Saw pod success Feb 17 12:48:10.604: INFO: Pod "pod-ba0c7742-5183-11ea-a180-0242ac110008" satisfied condition "success or failure" Feb 17 12:48:10.616: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-ba0c7742-5183-11ea-a180-0242ac110008 container test-container: STEP: delete the pod Feb 17 12:48:10.886: INFO: Waiting for pod pod-ba0c7742-5183-11ea-a180-0242ac110008 to disappear Feb 17 12:48:10.909: INFO: Pod pod-ba0c7742-5183-11ea-a180-0242ac110008 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Feb 17 12:48:10.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sql2p" for this suite. 
Feb 17 12:48:16.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:48:17.008: INFO: namespace: e2e-tests-emptydir-sql2p, resource: bindings, ignored listing per whitelist
Feb 17 12:48:17.073: INFO: namespace e2e-tests-emptydir-sql2p deletion completed in 6.155976285s

• [SLOW TEST:18.103 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:48:17.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 17 12:48:17.429: INFO: Waiting up to 5m0s for pod "pod-c4dcb60a-5183-11ea-a180-0242ac110008" in namespace "e2e-tests-emptydir-jhxvd" to be "success or failure"
Feb 17 12:48:17.475: INFO: Pod "pod-c4dcb60a-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 45.909103ms
Feb 17 12:48:19.487: INFO: Pod "pod-c4dcb60a-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058430715s
Feb 17 12:48:21.504: INFO: Pod "pod-c4dcb60a-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0748975s
Feb 17 12:48:23.557: INFO: Pod "pod-c4dcb60a-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127910453s
Feb 17 12:48:25.575: INFO: Pod "pod-c4dcb60a-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.146394543s
Feb 17 12:48:27.640: INFO: Pod "pod-c4dcb60a-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.211428362s
Feb 17 12:48:29.651: INFO: Pod "pod-c4dcb60a-5183-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.221777895s
STEP: Saw pod success
Feb 17 12:48:29.651: INFO: Pod "pod-c4dcb60a-5183-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:48:29.654: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c4dcb60a-5183-11ea-a180-0242ac110008 container test-container: 
STEP: delete the pod
Feb 17 12:48:29.714: INFO: Waiting for pod pod-c4dcb60a-5183-11ea-a180-0242ac110008 to disappear
Feb 17 12:48:30.725: INFO: Pod pod-c4dcb60a-5183-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:48:30.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jhxvd" for this suite.
Feb 17 12:48:37.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:48:37.557: INFO: namespace: e2e-tests-emptydir-jhxvd, resource: bindings, ignored listing per whitelist
Feb 17 12:48:37.716: INFO: namespace e2e-tests-emptydir-jhxvd deletion completed in 6.975474381s

• [SLOW TEST:20.642 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:48:37.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 17 12:48:37.984: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 17.492789ms)
Feb 17 12:48:37.995: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.276546ms)
Feb 17 12:48:38.028: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 32.893588ms)
Feb 17 12:48:38.035: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.214543ms)
Feb 17 12:48:38.047: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.015179ms)
Feb 17 12:48:38.055: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.754164ms)
Feb 17 12:48:38.060: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.539599ms)
Feb 17 12:48:38.065: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.360184ms)
Feb 17 12:48:38.071: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.095623ms)
Feb 17 12:48:38.076: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.40911ms)
Feb 17 12:48:38.080: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.740412ms)
Feb 17 12:48:38.085: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.276303ms)
Feb 17 12:48:38.098: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.341066ms)
Feb 17 12:48:38.114: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.335297ms)
Feb 17 12:48:38.120: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.668913ms)
Feb 17 12:48:38.124: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.044476ms)
Feb 17 12:48:38.132: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.162344ms)
Feb 17 12:48:38.137: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.507919ms)
Feb 17 12:48:38.147: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.482183ms)
Feb 17 12:48:38.155: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.504395ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:48:38.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-5xmh2" for this suite.
Feb 17 12:48:44.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:48:44.314: INFO: namespace: e2e-tests-proxy-5xmh2, resource: bindings, ignored listing per whitelist
Feb 17 12:48:44.390: INFO: namespace e2e-tests-proxy-5xmh2 deletion completed in 6.225594832s

• [SLOW TEST:6.675 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
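
The twenty numbered requests above all hit the node proxy subresource: the apiserver forwards GET /api/v1/nodes/<node>/proxy/logs/ to the kubelet's /logs/ handler and relays the directory listing back, which is why each entry shows a truncated "alternatives.log" body and a 200 status. A minimal way to reproduce one of these requests by hand, assuming the same kubeconfig and taking the node name from the log above, is:

# Fetch the kubelet's log directory listing through the apiserver's node proxy subresource.
kubectl --kubeconfig=/root/.kube/config get --raw \
  "/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/"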
------------------------------
SSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:48:44.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 17 12:48:44.679: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.6898ms)
Feb 17 12:48:44.690: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.077296ms)
Feb 17 12:48:44.696: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.091581ms)
Feb 17 12:48:44.702: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.810831ms)
Feb 17 12:48:44.707: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.120954ms)
Feb 17 12:48:44.712: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.3135ms)
Feb 17 12:48:44.716: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.195795ms)
Feb 17 12:48:44.721: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.19359ms)
Feb 17 12:48:44.726: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.187416ms)
Feb 17 12:48:44.730: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.702457ms)
Feb 17 12:48:44.734: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.927933ms)
Feb 17 12:48:44.738: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.858438ms)
Feb 17 12:48:44.742: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.706128ms)
Feb 17 12:48:44.747: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.602558ms)
Feb 17 12:48:44.752: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.114037ms)
Feb 17 12:48:44.801: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 48.958711ms)
Feb 17 12:48:44.808: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.159701ms)
Feb 17 12:48:44.814: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.23805ms)
Feb 17 12:48:44.825: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.197254ms)
Feb 17 12:48:44.830: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.649302ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:48:44.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-s7wdd" for this suite.
Feb 17 12:48:50.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:48:51.053: INFO: namespace: e2e-tests-proxy-s7wdd, resource: bindings, ignored listing per whitelist
Feb 17 12:48:51.116: INFO: namespace e2e-tests-proxy-s7wdd deletion completed in 6.279149148s

• [SLOW TEST:6.725 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
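
The only difference from the previous proxy spec is that the resource name in the URL carries an explicit kubelet port (hunter-server-hu5at5svl7ps:10250), so the apiserver dials that port instead of the node's default proxy port. The hand-run equivalent, again a sketch against this cluster:

# Same node proxy subresource, but addressing the kubelet port explicitly.
kubectl --kubeconfig=/root/.kube/config get --raw \
  "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"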
------------------------------
SSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:48:51.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 17 12:48:51.444: INFO: Waiting up to 5m0s for pod "downward-api-d92b1491-5183-11ea-a180-0242ac110008" in namespace "e2e-tests-downward-api-vq4sh" to be "success or failure"
Feb 17 12:48:51.500: INFO: Pod "downward-api-d92b1491-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 55.971823ms
Feb 17 12:48:53.694: INFO: Pod "downward-api-d92b1491-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250089625s
Feb 17 12:48:55.724: INFO: Pod "downward-api-d92b1491-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.280165361s
Feb 17 12:48:59.039: INFO: Pod "downward-api-d92b1491-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.595170116s
Feb 17 12:49:01.057: INFO: Pod "downward-api-d92b1491-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.613121678s
Feb 17 12:49:03.110: INFO: Pod "downward-api-d92b1491-5183-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.666115837s
STEP: Saw pod success
Feb 17 12:49:03.111: INFO: Pod "downward-api-d92b1491-5183-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:49:03.125: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-d92b1491-5183-11ea-a180-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 17 12:49:03.353: INFO: Waiting for pod downward-api-d92b1491-5183-11ea-a180-0242ac110008 to disappear
Feb 17 12:49:03.372: INFO: Pod downward-api-d92b1491-5183-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:49:03.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vq4sh" for this suite.
Feb 17 12:49:09.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:49:09.559: INFO: namespace: e2e-tests-downward-api-vq4sh, resource: bindings, ignored listing per whitelist
Feb 17 12:49:09.635: INFO: namespace e2e-tests-downward-api-vq4sh deletion completed in 6.254595421s

• [SLOW TEST:18.519 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
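
The dapi-container above receives its own pod UID through a Downward API environment variable, and the test then checks the container's output for it. A stand-alone sketch of that wiring (the pod and variable names here are illustrative, not the generated ones from this run):

# Sketch: expose the pod's own UID to the container via the Downward API.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF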
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:49:09.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 17 12:49:10.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-x8r9f'
Feb 17 12:49:10.314: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 17 12:49:10.314: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 17 12:49:10.386: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-tmvhs]
Feb 17 12:49:10.386: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-tmvhs" in namespace "e2e-tests-kubectl-x8r9f" to be "running and ready"
Feb 17 12:49:10.469: INFO: Pod "e2e-test-nginx-rc-tmvhs": Phase="Pending", Reason="", readiness=false. Elapsed: 82.055633ms
Feb 17 12:49:12.510: INFO: Pod "e2e-test-nginx-rc-tmvhs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123331344s
Feb 17 12:49:14.556: INFO: Pod "e2e-test-nginx-rc-tmvhs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169162482s
Feb 17 12:49:16.617: INFO: Pod "e2e-test-nginx-rc-tmvhs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.23050783s
Feb 17 12:49:18.644: INFO: Pod "e2e-test-nginx-rc-tmvhs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257739115s
Feb 17 12:49:20.660: INFO: Pod "e2e-test-nginx-rc-tmvhs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.273636093s
Feb 17 12:49:22.681: INFO: Pod "e2e-test-nginx-rc-tmvhs": Phase="Running", Reason="", readiness=true. Elapsed: 12.294494112s
Feb 17 12:49:22.681: INFO: Pod "e2e-test-nginx-rc-tmvhs" satisfied condition "running and ready"
Feb 17 12:49:22.681: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-tmvhs]
Feb 17 12:49:22.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-x8r9f'
Feb 17 12:49:22.932: INFO: stderr: ""
Feb 17 12:49:22.932: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb 17 12:49:22.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-x8r9f'
Feb 17 12:49:23.128: INFO: stderr: ""
Feb 17 12:49:23.129: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:49:23.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-x8r9f" for this suite.
Feb 17 12:49:47.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:49:47.409: INFO: namespace: e2e-tests-kubectl-x8r9f, resource: bindings, ignored listing per whitelist
Feb 17 12:49:47.480: INFO: namespace e2e-tests-kubectl-x8r9f deletion completed in 24.338467998s

• [SLOW TEST:37.845 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
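
The stderr line above flags --generator=run/v1 as deprecated and points at kubectl create as the replacement. What that generator produces is a plain ReplicationController; a rough manifest-based equivalent of the same create/verify/logs cycle (the manifest itself is a sketch, the names and image are the test's):

# Roughly what "kubectl run e2e-test-nginx-rc --image=... --generator=run/v1" expands to.
cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config create -n e2e-tests-kubectl-x8r9f -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
EOF
# Verify the controller and its pod, then read logs through the controller, as the test does.
kubectl -n e2e-tests-kubectl-x8r9f get rc,pods -l run=e2e-test-nginx-rc
kubectl -n e2e-tests-kubectl-x8r9f logs rc/e2e-test-nginx-rc
kubectl -n e2e-tests-kubectl-x8r9f delete rc e2e-test-nginx-rc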
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:49:47.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-faba0f7e-5183-11ea-a180-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 17 12:49:47.765: INFO: Waiting up to 5m0s for pod "pod-configmaps-fabcb04f-5183-11ea-a180-0242ac110008" in namespace "e2e-tests-configmap-w5hr5" to be "success or failure"
Feb 17 12:49:47.799: INFO: Pod "pod-configmaps-fabcb04f-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 33.516612ms
Feb 17 12:49:49.970: INFO: Pod "pod-configmaps-fabcb04f-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205200714s
Feb 17 12:49:51.988: INFO: Pod "pod-configmaps-fabcb04f-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222351492s
Feb 17 12:49:54.016: INFO: Pod "pod-configmaps-fabcb04f-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250519557s
Feb 17 12:49:56.041: INFO: Pod "pod-configmaps-fabcb04f-5183-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.276207894s
Feb 17 12:49:58.126: INFO: Pod "pod-configmaps-fabcb04f-5183-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.360393799s
STEP: Saw pod success
Feb 17 12:49:58.126: INFO: Pod "pod-configmaps-fabcb04f-5183-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:49:58.192: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-fabcb04f-5183-11ea-a180-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 17 12:49:58.506: INFO: Waiting for pod pod-configmaps-fabcb04f-5183-11ea-a180-0242ac110008 to disappear
Feb 17 12:49:58.565: INFO: Pod pod-configmaps-fabcb04f-5183-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:49:58.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-w5hr5" for this suite.
Feb 17 12:50:04.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:50:04.966: INFO: namespace: e2e-tests-configmap-w5hr5, resource: bindings, ignored listing per whitelist
Feb 17 12:50:04.971: INFO: namespace e2e-tests-configmap-w5hr5 deletion completed in 6.385123589s

• [SLOW TEST:17.490 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
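
The configmap-volume-test container above simply reads back a key that was projected into its filesystem from the ConfigMap. A minimal stand-alone version of that pattern, with hypothetical names:

# ConfigMap mounted as a volume; the container prints the file created from key data-1.
kubectl create configmap demo-config --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-config
EOF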
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:50:04.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 17 12:50:05.086: INFO: Waiting up to 5m0s for pod "pod-0510824b-5184-11ea-a180-0242ac110008" in namespace "e2e-tests-emptydir-h8x27" to be "success or failure"
Feb 17 12:50:05.094: INFO: Pod "pod-0510824b-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.311732ms
Feb 17 12:50:07.106: INFO: Pod "pod-0510824b-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020428632s
Feb 17 12:50:09.125: INFO: Pod "pod-0510824b-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039310766s
Feb 17 12:50:12.144: INFO: Pod "pod-0510824b-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.058011431s
Feb 17 12:50:14.181: INFO: Pod "pod-0510824b-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.095426822s
Feb 17 12:50:16.203: INFO: Pod "pod-0510824b-5184-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.11762051s
STEP: Saw pod success
Feb 17 12:50:16.204: INFO: Pod "pod-0510824b-5184-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:50:16.208: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0510824b-5184-11ea-a180-0242ac110008 container test-container: 
STEP: delete the pod
Feb 17 12:50:16.339: INFO: Waiting for pod pod-0510824b-5184-11ea-a180-0242ac110008 to disappear
Feb 17 12:50:16.359: INFO: Pod pod-0510824b-5184-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:50:16.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-h8x27" for this suite.
Feb 17 12:50:22.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:50:22.505: INFO: namespace: e2e-tests-emptydir-h8x27, resource: bindings, ignored listing per whitelist
Feb 17 12:50:22.745: INFO: namespace e2e-tests-emptydir-h8x27 deletion completed in 6.367139542s

• [SLOW TEST:17.774 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
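
The emptyDir specs in this run, (non-root,0644,tmpfs) and (root,0644,tmpfs) earlier and (root,0777,tmpfs) here, all share one shape: the pod mounts an emptyDir backed by tmpfs (medium: Memory), the test's helper container creates a file with the requested mode, and the output is checked for the expected permissions and ownership; the non-root variants additionally run the container as a non-root UID. A hand-rolled sketch of the tmpfs case, with illustrative names and a simplified check:

# Sketch: tmpfs-backed emptyDir; create a file, show its mode, and confirm the mount is tmpfs.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/volume/f && chmod 0644 /mnt/volume/f && ls -l /mnt/volume && mount | grep /mnt/volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
kubectl logs emptydir-tmpfs-demo   # once the pod has completed; expect a tmpfs mount and mode 0644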
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:50:22.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 17 12:50:23.024: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fba516b-5184-11ea-a180-0242ac110008" in namespace "e2e-tests-downward-api-rh2g7" to be "success or failure"
Feb 17 12:50:23.059: INFO: Pod "downwardapi-volume-0fba516b-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 34.250632ms
Feb 17 12:50:25.360: INFO: Pod "downwardapi-volume-0fba516b-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335511388s
Feb 17 12:50:27.392: INFO: Pod "downwardapi-volume-0fba516b-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367356439s
Feb 17 12:50:29.684: INFO: Pod "downwardapi-volume-0fba516b-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.659532067s
Feb 17 12:50:31.702: INFO: Pod "downwardapi-volume-0fba516b-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.677998631s
Feb 17 12:50:33.718: INFO: Pod "downwardapi-volume-0fba516b-5184-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.693600902s
STEP: Saw pod success
Feb 17 12:50:33.718: INFO: Pod "downwardapi-volume-0fba516b-5184-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:50:33.736: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0fba516b-5184-11ea-a180-0242ac110008 container client-container: 
STEP: delete the pod
Feb 17 12:50:34.789: INFO: Waiting for pod downwardapi-volume-0fba516b-5184-11ea-a180-0242ac110008 to disappear
Feb 17 12:50:34.810: INFO: Pod downwardapi-volume-0fba516b-5184-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:50:34.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rh2g7" for this suite.
Feb 17 12:50:40.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:50:40.961: INFO: namespace: e2e-tests-downward-api-rh2g7, resource: bindings, ignored listing per whitelist
Feb 17 12:50:41.082: INFO: namespace e2e-tests-downward-api-rh2g7 deletion completed in 6.260855443s

• [SLOW TEST:18.337 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
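
"Set mode on item file" means the downwardAPI volume item carries an explicit per-file mode, and the client-container's output is checked for that mode. A minimal sketch of such a volume (names and the label are illustrative):

# Sketch: downwardAPI volume item with an explicit file mode.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
  labels:
    zone: us-east-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/labels"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
        mode: 0400   # the per-item mode this kind of test asserts on
EOF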
------------------------------
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:50:41.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-1aa303b3-5184-11ea-a180-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 17 12:50:41.387: INFO: Waiting up to 5m0s for pod "pod-secrets-1ab384e6-5184-11ea-a180-0242ac110008" in namespace "e2e-tests-secrets-52d2q" to be "success or failure"
Feb 17 12:50:41.397: INFO: Pod "pod-secrets-1ab384e6-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.19831ms
Feb 17 12:50:43.689: INFO: Pod "pod-secrets-1ab384e6-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.301148172s
Feb 17 12:50:45.713: INFO: Pod "pod-secrets-1ab384e6-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325927873s
Feb 17 12:50:48.124: INFO: Pod "pod-secrets-1ab384e6-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.73685721s
Feb 17 12:50:50.248: INFO: Pod "pod-secrets-1ab384e6-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.860563726s
Feb 17 12:50:52.259: INFO: Pod "pod-secrets-1ab384e6-5184-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.871117981s
STEP: Saw pod success
Feb 17 12:50:52.259: INFO: Pod "pod-secrets-1ab384e6-5184-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:50:52.265: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-1ab384e6-5184-11ea-a180-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 17 12:50:52.604: INFO: Waiting for pod pod-secrets-1ab384e6-5184-11ea-a180-0242ac110008 to disappear
Feb 17 12:50:52.627: INFO: Pod pod-secrets-1ab384e6-5184-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:50:52.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-52d2q" for this suite.
Feb 17 12:50:58.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:50:58.828: INFO: namespace: e2e-tests-secrets-52d2q, resource: bindings, ignored listing per whitelist
Feb 17 12:50:58.981: INFO: namespace e2e-tests-secrets-52d2q deletion completed in 6.281532041s

• [SLOW TEST:17.898 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
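
"With mappings and Item Mode set" means the secret's keys are remapped to different file paths inside the volume and each item carries its own mode instead of relying on defaultMode. A minimal sketch with hypothetical secret and key names:

# Sketch: secret volume with a remapped item path and an explicit per-item mode.
kubectl create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400
EOF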
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:50:58.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0217 12:51:09.386120       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 17 12:51:09.386: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:51:09.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-fm8p8" for this suite.
Feb 17 12:51:15.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:51:15.556: INFO: namespace: e2e-tests-gc-fm8p8, resource: bindings, ignored listing per whitelist
Feb 17 12:51:15.662: INFO: namespace e2e-tests-gc-fm8p8 deletion completed in 6.265938786s

• [SLOW TEST:16.681 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
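
This test creates a ReplicationController, deletes it without orphaning, and waits for the garbage collector to remove the pods that carry an ownerReference to it; the metrics block above is informational, and the "Master node is not registered" warning only means scheduler and controller-manager metrics were skipped. The same behaviour can be sketched by hand, reusing the RC manifest sketched earlier:

# Cascading (non-orphaning) delete: the RC's pods are garbage collected along with it.
kubectl get pods -l run=e2e-test-nginx-rc          # dependents carry ownerReferences to the RC
kubectl delete rc e2e-test-nginx-rc                # default cascade; the pods disappear shortly after
# For contrast, "kubectl delete rc e2e-test-nginx-rc --cascade=false" would orphan the pods instead.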
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:51:15.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-2f426ba1-5184-11ea-a180-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 17 12:51:15.889: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2f43a890-5184-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-v6v28" to be "success or failure"
Feb 17 12:51:15.897: INFO: Pod "pod-projected-secrets-2f43a890-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.801044ms
Feb 17 12:51:17.908: INFO: Pod "pod-projected-secrets-2f43a890-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018709925s
Feb 17 12:51:19.925: INFO: Pod "pod-projected-secrets-2f43a890-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036422345s
Feb 17 12:51:22.047: INFO: Pod "pod-projected-secrets-2f43a890-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157887395s
Feb 17 12:51:24.140: INFO: Pod "pod-projected-secrets-2f43a890-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.250680366s
Feb 17 12:51:26.165: INFO: Pod "pod-projected-secrets-2f43a890-5184-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.276055161s
STEP: Saw pod success
Feb 17 12:51:26.165: INFO: Pod "pod-projected-secrets-2f43a890-5184-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:51:26.177: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-2f43a890-5184-11ea-a180-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 17 12:51:26.359: INFO: Waiting for pod pod-projected-secrets-2f43a890-5184-11ea-a180-0242ac110008 to disappear
Feb 17 12:51:26.370: INFO: Pod pod-projected-secrets-2f43a890-5184-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:51:26.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v6v28" for this suite.
Feb 17 12:51:32.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:51:32.609: INFO: namespace: e2e-tests-projected-v6v28, resource: bindings, ignored listing per whitelist
Feb 17 12:51:32.845: INFO: namespace e2e-tests-projected-v6v28 deletion completed in 6.466409209s

• [SLOW TEST:17.182 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
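
A projected volume wraps one or more sources (here a single secret) under one mount point, so the consuming container sees the same file layout as with a plain secret volume. A minimal equivalent with illustrative names:

# Sketch: secret exposed through a projected volume.
kubectl create secret generic projected-demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-demo-secret
EOF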
------------------------------
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:51:32.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 17 12:51:33.106: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 17 12:51:33.120: INFO: Waiting for terminating namespaces to be deleted...
Feb 17 12:51:33.124: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb 17 12:51:33.140: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 17 12:51:33.140: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 17 12:51:33.140: INFO: 	Container weave ready: true, restart count 0
Feb 17 12:51:33.140: INFO: 	Container weave-npc ready: true, restart count 0
Feb 17 12:51:33.140: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 17 12:51:33.140: INFO: 	Container coredns ready: true, restart count 0
Feb 17 12:51:33.140: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 17 12:51:33.140: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 17 12:51:33.140: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 17 12:51:33.140: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 17 12:51:33.140: INFO: 	Container coredns ready: true, restart count 0
Feb 17 12:51:33.140: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 17 12:51:33.140: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f43196df701c87], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:51:34.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-7r5lg" for this suite.
Feb 17 12:51:40.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:51:40.458: INFO: namespace: e2e-tests-sched-pred-7r5lg, resource: bindings, ignored listing per whitelist
Feb 17 12:51:40.579: INFO: namespace e2e-tests-sched-pred-7r5lg deletion completed in 6.370569127s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.735 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
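
The FailedScheduling event above is exactly what a pod whose nodeSelector matches no node produces: it stays Pending and the scheduler records "0/1 nodes are available: 1 node(s) didn't match node selector." Reproducing that by hand takes nothing more than a selector no node carries (names are illustrative):

# Sketch: a nodeSelector no node satisfies leaves the pod Pending with a FailedScheduling event.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo
spec:
  nodeSelector:
    label: nonempty
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl describe pod restricted-pod-demo   # Events section shows the FailedScheduling message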
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:51:40.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 17 12:51:40.970: INFO: Waiting up to 5m0s for pod "pod-3e3617cd-5184-11ea-a180-0242ac110008" in namespace "e2e-tests-emptydir-fgx4n" to be "success or failure"
Feb 17 12:51:40.994: INFO: Pod "pod-3e3617cd-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 23.866672ms
Feb 17 12:51:43.019: INFO: Pod "pod-3e3617cd-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04806287s
Feb 17 12:51:45.066: INFO: Pod "pod-3e3617cd-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095946653s
Feb 17 12:51:47.084: INFO: Pod "pod-3e3617cd-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113691995s
Feb 17 12:51:49.099: INFO: Pod "pod-3e3617cd-5184-11ea-a180-0242ac110008": Phase="Running", Reason="", readiness=true. Elapsed: 8.128259236s
Feb 17 12:51:51.139: INFO: Pod "pod-3e3617cd-5184-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.168049583s
STEP: Saw pod success
Feb 17 12:51:51.139: INFO: Pod "pod-3e3617cd-5184-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:51:51.147: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-3e3617cd-5184-11ea-a180-0242ac110008 container test-container: 
STEP: delete the pod
Feb 17 12:51:51.392: INFO: Waiting for pod pod-3e3617cd-5184-11ea-a180-0242ac110008 to disappear
Feb 17 12:51:51.404: INFO: Pod pod-3e3617cd-5184-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:51:51.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fgx4n" for this suite.
Feb 17 12:51:57.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:51:57.694: INFO: namespace: e2e-tests-emptydir-fgx4n, resource: bindings, ignored listing per whitelist
Feb 17 12:51:57.790: INFO: namespace e2e-tests-emptydir-fgx4n deletion completed in 6.313046477s

• [SLOW TEST:17.210 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
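
Relative to the tmpfs sketch shown earlier, this variant drops medium: Memory (the emptyDir lives on the node's default storage instead of tmpfs), asks for mode 0777, and runs the container as a non-root user. A compact sketch of those differences, with an illustrative name and UID:

# Sketch: default-medium emptyDir written by a non-root user.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001   # non-root UID; emptyDir directories are world-writable by default
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/volume/f && chmod 0777 /mnt/volume/f && ls -ln /mnt/volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/volume
  volumes:
  - name: test-volume
    emptyDir: {}   # default medium (node disk), not tmpfs
EOF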
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:51:57.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 17 12:51:58.035: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 17 12:51:58.131: INFO: Waiting for terminating namespaces to be deleted...
Feb 17 12:51:58.150: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb 17 12:51:58.183: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 17 12:51:58.183: INFO: 	Container coredns ready: true, restart count 0
Feb 17 12:51:58.183: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 17 12:51:58.183: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 17 12:51:58.183: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 17 12:51:58.183: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 17 12:51:58.183: INFO: 	Container weave ready: true, restart count 0
Feb 17 12:51:58.183: INFO: 	Container weave-npc ready: true, restart count 0
Feb 17 12:51:58.183: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 17 12:51:58.183: INFO: 	Container coredns ready: true, restart count 0
Feb 17 12:51:58.183: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 17 12:51:58.183: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 17 12:51:58.183: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-4e8d1c55-5184-11ea-a180-0242ac110008 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-4e8d1c55-5184-11ea-a180-0242ac110008 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-4e8d1c55-5184-11ea-a180-0242ac110008
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:52:22.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-stbt6" for this suite.
Feb 17 12:52:36.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:52:37.060: INFO: namespace: e2e-tests-sched-pred-stbt6, resource: bindings, ignored listing per whitelist
Feb 17 12:52:37.115: INFO: namespace e2e-tests-sched-pred-stbt6 deletion completed in 14.27314311s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:39.324 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
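
The matching counterpart works the other way around: it launches a throwaway pod to find a schedulable node, applies a random label to that node (the kubernetes.io/e2e-... key with value 42 in the log), relaunches the pod with a matching nodeSelector, and finally strips the label again. By hand, against the node from this run and with a stand-in for the generated label key:

# Label the node, schedule a pod against that label, then remove the label.
kubectl label node hunter-server-hu5at5svl7ps kubernetes.io/e2e-demo=42
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: with-labels-demo
spec:
  nodeSelector:
    kubernetes.io/e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl label node hunter-server-hu5at5svl7ps kubernetes.io/e2e-demo-   # clean up the label afterwards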
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:52:37.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 17 12:52:37.348: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 17 12:52:42.917: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 17 12:52:47.779: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 17 12:52:49.798: INFO: Creating deployment "test-rollover-deployment"
Feb 17 12:52:49.856: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 17 12:52:51.897: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 17 12:52:51.914: INFO: Ensure that both replica sets have 1 created replica
Feb 17 12:52:51.925: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 17 12:52:51.955: INFO: Updating deployment test-rollover-deployment
Feb 17 12:52:51.955: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 17 12:52:54.854: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 17 12:52:55.689: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 17 12:52:55.897: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 12:52:55.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540773, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 12:52:57.995: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 12:52:57.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540773, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 12:52:59.946: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 12:52:59.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540773, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 12:53:01.945: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 12:53:01.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540773, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 12:53:03.920: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 12:53:03.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540782, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 12:53:05.916: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 12:53:05.916: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540782, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 12:53:07.920: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 12:53:07.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540782, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 12:53:09.929: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 12:53:09.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540782, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 12:53:11.931: INFO: all replica sets need to contain the pod-template-hash label
Feb 17 12:53:11.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540782, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717540770, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 12:53:14.122: INFO: 
Feb 17 12:53:14.122: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 17 12:53:14.140: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-p7smt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p7smt/deployments/test-rollover-deployment,UID:6740d424-5184-11ea-a994-fa163e34d433,ResourceVersion:21982046,Generation:2,CreationTimestamp:2020-02-17 12:52:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-17 12:52:50 +0000 UTC 2020-02-17 12:52:50 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-17 12:53:12 +0000 UTC 2020-02-17 12:52:50 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 17 12:53:14.145: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-p7smt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p7smt/replicasets/test-rollover-deployment-5b8479fdb6,UID:688a1269-5184-11ea-a994-fa163e34d433,ResourceVersion:21982037,Generation:2,CreationTimestamp:2020-02-17 12:52:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6740d424-5184-11ea-a994-fa163e34d433 0xc00231ff37 0xc00231ff38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 17 12:53:14.145: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 17 12:53:14.146: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-p7smt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p7smt/replicasets/test-rollover-controller,UID:5fd07a12-5184-11ea-a994-fa163e34d433,ResourceVersion:21982045,Generation:2,CreationTimestamp:2020-02-17 12:52:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6740d424-5184-11ea-a994-fa163e34d433 0xc00231fd67 0xc00231fd68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 17 12:53:14.146: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-p7smt,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p7smt/replicasets/test-rollover-deployment-58494b7559,UID:675b11db-5184-11ea-a994-fa163e34d433,ResourceVersion:21982001,Generation:2,CreationTimestamp:2020-02-17 12:52:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6740d424-5184-11ea-a994-fa163e34d433 0xc00231fe67 0xc00231fe68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 17 12:53:14.152: INFO: Pod "test-rollover-deployment-5b8479fdb6-5md2s" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-5md2s,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-p7smt,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p7smt/pods/test-rollover-deployment-5b8479fdb6-5md2s,UID:68fa9d17-5184-11ea-a994-fa163e34d433,ResourceVersion:21982022,Generation:0,CreationTimestamp:2020-02-17 12:52:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 688a1269-5184-11ea-a994-fa163e34d433 0xc001eb3097 0xc001eb3098}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-tqz9l {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tqz9l,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-tqz9l true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eb3100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eb3120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 12:52:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 12:53:01 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 12:53:01 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-17 12:52:52 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-17 12:52:53 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-17 12:53:00 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://49f38e0a1bfbb544bfbc4b0ecc2370eb9eb11ce4cdc23b5150f2427b84244869}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:53:14.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-p7smt" for this suite.
Feb 17 12:53:22.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:53:22.638: INFO: namespace: e2e-tests-deployment-p7smt, resource: bindings, ignored listing per whitelist
Feb 17 12:53:22.706: INFO: namespace e2e-tests-deployment-p7smt deletion completed in 8.548236002s

• [SLOW TEST:45.591 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
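The status dumps above already contain the interesting knobs; here is the same deployment reduced to a buildable sketch. Replica count, strategy (MaxUnavailable 0 / MaxSurge 1), MinReadySeconds 10, labels, and the redis image are the values shown in the log — MinReadySeconds is why the old ReplicaSet lingers until the new pod has been ready for ten seconds. Only the package and function names are invented.

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rolloverDeployment mirrors the spec dumped above: surge-only rolling updates
// plus a MinReadySeconds delay, so a rollover never drops below one ready pod.
func rolloverDeployment() *appsv1.Deployment {
	replicas := int32(1)
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)
	labels := map[string]string{"name": "rollover-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        &replicas,
			MinReadySeconds: 10,
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
}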
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:53:22.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 17 12:53:23.015: INFO: Waiting up to 5m0s for pod "downward-api-7b0931de-5184-11ea-a180-0242ac110008" in namespace "e2e-tests-downward-api-gplhx" to be "success or failure"
Feb 17 12:53:23.047: INFO: Pod "downward-api-7b0931de-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 32.272335ms
Feb 17 12:53:25.121: INFO: Pod "downward-api-7b0931de-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106515343s
Feb 17 12:53:27.143: INFO: Pod "downward-api-7b0931de-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128252197s
Feb 17 12:53:29.168: INFO: Pod "downward-api-7b0931de-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152741838s
Feb 17 12:53:31.245: INFO: Pod "downward-api-7b0931de-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.230563545s
Feb 17 12:53:33.260: INFO: Pod "downward-api-7b0931de-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.24527626s
Feb 17 12:53:35.361: INFO: Pod "downward-api-7b0931de-5184-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.346467137s
STEP: Saw pod success
Feb 17 12:53:35.361: INFO: Pod "downward-api-7b0931de-5184-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:53:35.386: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7b0931de-5184-11ea-a180-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 17 12:53:35.609: INFO: Waiting for pod downward-api-7b0931de-5184-11ea-a180-0242ac110008 to disappear
Feb 17 12:53:35.630: INFO: Pod downward-api-7b0931de-5184-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:53:35.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gplhx" for this suite.
Feb 17 12:53:41.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:53:41.967: INFO: namespace: e2e-tests-downward-api-gplhx, resource: bindings, ignored listing per whitelist
Feb 17 12:53:42.113: INFO: namespace e2e-tests-downward-api-gplhx deletion completed in 6.411659145s

• [SLOW TEST:19.406 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
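What this spec exercises, sketched under assumed names and image (only the container name "dapi-container" appears in the log): the container declares no resource limits yet reads limits.cpu and limits.memory through downward-API env vars, so the kubelet substitutes the node's allocatable values.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// defaultLimitsPod exposes limits.cpu and limits.memory via the downward API
// without setting any limits on the container, so the injected values fall
// back to the node's allocatable CPU and memory.
func defaultLimitsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // assumed; the run's image is not shown in the log
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
						},
					},
				},
			}},
		},
	}
}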
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:53:42.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-8692ef98-5184-11ea-a180-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 17 12:53:42.576: INFO: Waiting up to 5m0s for pod "pod-configmaps-86aac0fe-5184-11ea-a180-0242ac110008" in namespace "e2e-tests-configmap-rt742" to be "success or failure"
Feb 17 12:53:42.665: INFO: Pod "pod-configmaps-86aac0fe-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 88.43674ms
Feb 17 12:53:44.726: INFO: Pod "pod-configmaps-86aac0fe-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148986352s
Feb 17 12:53:46.758: INFO: Pod "pod-configmaps-86aac0fe-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181525138s
Feb 17 12:53:49.190: INFO: Pod "pod-configmaps-86aac0fe-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.613239795s
Feb 17 12:53:51.202: INFO: Pod "pod-configmaps-86aac0fe-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.6250454s
Feb 17 12:53:53.215: INFO: Pod "pod-configmaps-86aac0fe-5184-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.637833679s
STEP: Saw pod success
Feb 17 12:53:53.215: INFO: Pod "pod-configmaps-86aac0fe-5184-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:53:53.218: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-86aac0fe-5184-11ea-a180-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 17 12:53:53.526: INFO: Waiting for pod pod-configmaps-86aac0fe-5184-11ea-a180-0242ac110008 to disappear
Feb 17 12:53:53.574: INFO: Pod pod-configmaps-86aac0fe-5184-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:53:53.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rt742" for this suite.
Feb 17 12:54:00.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:54:00.809: INFO: namespace: e2e-tests-configmap-rt742, resource: bindings, ignored listing per whitelist
Feb 17 12:54:00.912: INFO: namespace e2e-tests-configmap-rt742 deletion completed in 7.307839367s

• [SLOW TEST:18.799 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
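A sketch of the pod shape behind this spec, with assumed key, path, UID, and image (only the container name "configmap-volume-test" comes from the log): the ConfigMap key is remapped to a nested path and the pod runs as a non-root user, so the projected file still has to be readable at that UID.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nonRootConfigMapPod mounts a ConfigMap volume, remapping key "data-1" to the
// file "path/to/data-2", and runs the whole pod as UID 1000 (non-root).
func nonRootConfigMapPod(configMapName string) *corev1.Pod {
	uid := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
						Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox", // assumed
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
}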
SSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:54:00.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 17 12:54:01.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:54:11.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rcdzc" for this suite.
Feb 17 12:55:01.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:55:02.157: INFO: namespace: e2e-tests-pods-rcdzc, resource: bindings, ignored listing per whitelist
Feb 17 12:55:02.205: INFO: namespace e2e-tests-pods-rcdzc deletion completed in 50.362905501s

• [SLOW TEST:61.292 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
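The spec above drives command execution through the API server's websocket protocol directly. A more common client-side route to the same pods/exec subresource is client-go's SPDY executor, sketched below as a substitute technique rather than what the conformance code does. The kubeconfig path is the one this run uses; namespace, pod name, container, and command are placeholders.

package main

import (
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Build a request against the pods/exec subresource, the same endpoint the
	// websocket-based spec talks to.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("default").        // placeholder
		Name("pod-exec-websockets"). // placeholder
		SubResource("exec").
		Param("container", "main").  // placeholder
		Param("command", "cat").
		Param("command", "/etc/resolv.conf"). // placeholder command
		Param("stdout", "true").
		Param("stderr", "true")

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
		panic(err)
	}
}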
SSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:55:02.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-rj8th/configmap-test-b65bf651-5184-11ea-a180-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 17 12:55:02.589: INFO: Waiting up to 5m0s for pod "pod-configmaps-b65f698a-5184-11ea-a180-0242ac110008" in namespace "e2e-tests-configmap-rj8th" to be "success or failure"
Feb 17 12:55:02.885: INFO: Pod "pod-configmaps-b65f698a-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 296.40177ms
Feb 17 12:55:05.396: INFO: Pod "pod-configmaps-b65f698a-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.806784358s
Feb 17 12:55:07.485: INFO: Pod "pod-configmaps-b65f698a-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.896375373s
Feb 17 12:55:09.504: INFO: Pod "pod-configmaps-b65f698a-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.915101198s
Feb 17 12:55:11.518: INFO: Pod "pod-configmaps-b65f698a-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.9288991s
Feb 17 12:55:14.369: INFO: Pod "pod-configmaps-b65f698a-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.780251615s
Feb 17 12:55:16.393: INFO: Pod "pod-configmaps-b65f698a-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 13.804148801s
Feb 17 12:55:18.403: INFO: Pod "pod-configmaps-b65f698a-5184-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.814073101s
STEP: Saw pod success
Feb 17 12:55:18.403: INFO: Pod "pod-configmaps-b65f698a-5184-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:55:18.408: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-b65f698a-5184-11ea-a180-0242ac110008 container env-test: 
STEP: delete the pod
Feb 17 12:55:18.631: INFO: Waiting for pod pod-configmaps-b65f698a-5184-11ea-a180-0242ac110008 to disappear
Feb 17 12:55:18.730: INFO: Pod pod-configmaps-b65f698a-5184-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:55:18.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rj8th" for this suite.
Feb 17 12:55:26.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:55:27.115: INFO: namespace: e2e-tests-configmap-rj8th, resource: bindings, ignored listing per whitelist
Feb 17 12:55:27.188: INFO: namespace e2e-tests-configmap-rj8th deletion completed in 8.439018661s

• [SLOW TEST:24.984 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
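Sketched below with assumed key, variable name, and image (the container name "env-test" is from the log): the ConfigMap key is wired straight into the container environment via configMapKeyRef, which is what the container's output is then checked against.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapEnvPod injects a single ConfigMap key into the container
// environment as the variable CONFIG_DATA_1.
func configMapEnvPod(configMapName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox", // assumed
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
}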
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:55:27.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 17 12:55:27.731: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c55c4605-5184-11ea-a180-0242ac110008" in namespace "e2e-tests-downward-api-md4hh" to be "success or failure"
Feb 17 12:55:27.743: INFO: Pod "downwardapi-volume-c55c4605-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.187852ms
Feb 17 12:55:30.112: INFO: Pod "downwardapi-volume-c55c4605-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.380924219s
Feb 17 12:55:32.332: INFO: Pod "downwardapi-volume-c55c4605-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.600800433s
Feb 17 12:55:34.348: INFO: Pod "downwardapi-volume-c55c4605-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.616640353s
Feb 17 12:55:36.505: INFO: Pod "downwardapi-volume-c55c4605-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.774518141s
Feb 17 12:55:38.556: INFO: Pod "downwardapi-volume-c55c4605-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.825480764s
Feb 17 12:55:40.590: INFO: Pod "downwardapi-volume-c55c4605-5184-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.859479479s
STEP: Saw pod success
Feb 17 12:55:40.591: INFO: Pod "downwardapi-volume-c55c4605-5184-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:55:40.609: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c55c4605-5184-11ea-a180-0242ac110008 container client-container: 
STEP: delete the pod
Feb 17 12:55:40.880: INFO: Waiting for pod downwardapi-volume-c55c4605-5184-11ea-a180-0242ac110008 to disappear
Feb 17 12:55:40.949: INFO: Pod downwardapi-volume-c55c4605-5184-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:55:40.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-md4hh" for this suite.
Feb 17 12:55:47.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:55:47.477: INFO: namespace: e2e-tests-downward-api-md4hh, resource: bindings, ignored listing per whitelist
Feb 17 12:55:47.483: INFO: namespace e2e-tests-downward-api-md4hh deletion completed in 6.454067882s

• [SLOW TEST:20.294 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
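A sketch of the volume wiring this spec relies on (paths and image assumed, container name "client-container" from the log): a downward-API volume file backed by resourceFieldRef limits.cpu on a container that sets no CPU limit, so the file ends up holding the node's allocatable CPU.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cpuLimitFilePod writes the container's effective CPU limit into a file via a
// downward API volume; with no limit declared, the published value defaults to
// node allocatable, which is what the spec checks.
func cpuLimitFilePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-cpu"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // assumed
				Command:      []string{"cat", "/etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}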
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:55:47.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-d1473774-5184-11ea-a180-0242ac110008
Feb 17 12:55:47.723: INFO: Pod name my-hostname-basic-d1473774-5184-11ea-a180-0242ac110008: Found 0 pods out of 1
Feb 17 12:55:53.024: INFO: Pod name my-hostname-basic-d1473774-5184-11ea-a180-0242ac110008: Found 1 pods out of 1
Feb 17 12:55:53.024: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d1473774-5184-11ea-a180-0242ac110008" are running
Feb 17 12:55:59.748: INFO: Pod "my-hostname-basic-d1473774-5184-11ea-a180-0242ac110008-mz5pj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 12:55:47 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 12:55:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d1473774-5184-11ea-a180-0242ac110008]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 12:55:47 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d1473774-5184-11ea-a180-0242ac110008]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-17 12:55:47 +0000 UTC Reason: Message:}])
Feb 17 12:55:59.749: INFO: Trying to dial the pod
Feb 17 12:56:04.805: INFO: Controller my-hostname-basic-d1473774-5184-11ea-a180-0242ac110008: Got expected result from replica 1 [my-hostname-basic-d1473774-5184-11ea-a180-0242ac110008-mz5pj]: "my-hostname-basic-d1473774-5184-11ea-a180-0242ac110008-mz5pj", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:56:04.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-72pt8" for this suite.
Feb 17 12:56:12.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:56:13.917: INFO: namespace: e2e-tests-replication-controller-72pt8, resource: bindings, ignored listing per whitelist
Feb 17 12:56:14.624: INFO: namespace e2e-tests-replication-controller-72pt8 deletion completed in 9.811888541s

• [SLOW TEST:27.141 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
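The controller above follows the classic serve-hostname pattern: each replica answers HTTP with its own pod name, so the "Got expected result from replica 1" line is the dialled pod echoing itself back. A sketch with an assumed image reference and port:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostnameRC builds a one-replica ReplicationController whose pod serves its
// own name over HTTP, so each replica can be dialled and checked individually.
func hostnameRC(name string) *corev1.ReplicationController {
	replicas := int32(1)
	labels := map[string]string{"name": name}
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  name,
						Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed reference
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},          // assumed port
					}},
				},
			},
		},
	}
}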
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:56:14.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-e1825086-5184-11ea-a180-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 17 12:56:15.070: INFO: Waiting up to 5m0s for pod "pod-configmaps-e1837935-5184-11ea-a180-0242ac110008" in namespace "e2e-tests-configmap-ktvdd" to be "success or failure"
Feb 17 12:56:15.077: INFO: Pod "pod-configmaps-e1837935-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.749151ms
Feb 17 12:56:17.090: INFO: Pod "pod-configmaps-e1837935-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019800214s
Feb 17 12:56:19.402: INFO: Pod "pod-configmaps-e1837935-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331109173s
Feb 17 12:56:21.447: INFO: Pod "pod-configmaps-e1837935-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.376845599s
Feb 17 12:56:25.118: INFO: Pod "pod-configmaps-e1837935-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.047039517s
Feb 17 12:56:27.133: INFO: Pod "pod-configmaps-e1837935-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.062969545s
Feb 17 12:56:29.145: INFO: Pod "pod-configmaps-e1837935-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.074471832s
Feb 17 12:56:31.218: INFO: Pod "pod-configmaps-e1837935-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.147531119s
Feb 17 12:56:33.234: INFO: Pod "pod-configmaps-e1837935-5184-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.163586198s
STEP: Saw pod success
Feb 17 12:56:33.234: INFO: Pod "pod-configmaps-e1837935-5184-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:56:33.239: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-e1837935-5184-11ea-a180-0242ac110008 container configmap-volume-test: 
STEP: delete the pod
Feb 17 12:56:34.788: INFO: Waiting for pod pod-configmaps-e1837935-5184-11ea-a180-0242ac110008 to disappear
Feb 17 12:56:34.801: INFO: Pod pod-configmaps-e1837935-5184-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:56:34.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ktvdd" for this suite.
Feb 17 12:56:43.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:56:43.636: INFO: namespace: e2e-tests-configmap-ktvdd, resource: bindings, ignored listing per whitelist
Feb 17 12:56:43.663: INFO: namespace e2e-tests-configmap-ktvdd deletion completed in 8.852121315s

• [SLOW TEST:29.038 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
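This variant differs from the earlier ConfigMap mapping sketch only in the per-item file mode; a single remapped item pinned to 0400 inside the same ConfigMapVolumeSource would look like this (key, path, and mode are assumed values):

package sketch

import corev1 "k8s.io/api/core/v1"

// itemWithMode maps a ConfigMap key to a path and pins that one file's mode to
// 0400, independent of the volume's DefaultMode.
func itemWithMode() corev1.KeyToPath {
	mode := int32(0400)
	return corev1.KeyToPath{Key: "data-1", Path: "path/to/data-2", Mode: &mode}
}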
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:56:43.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-f2dca676-5184-11ea-a180-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 17 12:56:44.083: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f2e141e1-5184-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-4ltfs" to be "success or failure"
Feb 17 12:56:44.267: INFO: Pod "pod-projected-secrets-f2e141e1-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 183.672635ms
Feb 17 12:56:46.280: INFO: Pod "pod-projected-secrets-f2e141e1-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19629275s
Feb 17 12:56:48.411: INFO: Pod "pod-projected-secrets-f2e141e1-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327305014s
Feb 17 12:56:52.572: INFO: Pod "pod-projected-secrets-f2e141e1-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.488204756s
Feb 17 12:56:54.596: INFO: Pod "pod-projected-secrets-f2e141e1-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.512699978s
Feb 17 12:56:56.621: INFO: Pod "pod-projected-secrets-f2e141e1-5184-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.536721283s
Feb 17 12:56:58.673: INFO: Pod "pod-projected-secrets-f2e141e1-5184-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.589490319s
STEP: Saw pod success
Feb 17 12:56:58.674: INFO: Pod "pod-projected-secrets-f2e141e1-5184-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:56:58.728: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-f2e141e1-5184-11ea-a180-0242ac110008 container projected-secret-volume-test: 
STEP: delete the pod
Feb 17 12:56:59.762: INFO: Waiting for pod pod-projected-secrets-f2e141e1-5184-11ea-a180-0242ac110008 to disappear
Feb 17 12:56:59.773: INFO: Pod pod-projected-secrets-f2e141e1-5184-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:56:59.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4ltfs" for this suite.
Feb 17 12:57:05.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:57:06.047: INFO: namespace: e2e-tests-projected-4ltfs, resource: bindings, ignored listing per whitelist
Feb 17 12:57:06.074: INFO: namespace e2e-tests-projected-4ltfs deletion completed in 6.277245573s

• [SLOW TEST:22.411 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
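A sketch of the projected-secret pod this spec builds, with assumed UID, fsGroup, mode, and image (the container name "projected-secret-volume-test" is from the log): DefaultMode controls the projected file permissions, and fsGroup is what lets the non-root user read them.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretPod mounts a secret through a projected volume with an
// explicit DefaultMode, running as a non-root UID with an fsGroup so group
// ownership still allows the container to read the projected files.
func projectedSecretPod(secretName string) *corev1.Pod {
	uid := int64(1000)
	fsGroup := int64(1001)
	mode := int32(0440)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox", // assumed
				Command:      []string{"ls", "-l", "/etc/projected-secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume"}},
			}},
		},
	}
}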
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:57:06.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 17 12:57:21.314: INFO: Successfully updated pod "annotationupdate0044d5b4-5185-11ea-a180-0242ac110008"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:57:23.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9b8ns" for this suite.
Feb 17 12:57:47.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:57:47.895: INFO: namespace: e2e-tests-downward-api-9b8ns, resource: bindings, ignored listing per whitelist
Feb 17 12:57:47.945: INFO: namespace e2e-tests-downward-api-9b8ns deletion completed in 24.411410988s

• [SLOW TEST:41.871 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
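The pod behind this spec, sketched with assumed names and a sample annotation: the pod's own annotations are projected into a file, and after the "Successfully updated pod" step above the kubelet rewrites that file in place without restarting the container.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationsFilePod projects the pod's annotations into /etc/podinfo/annotations;
// patching the annotations later changes the file's contents on the fly.
func annotationsFilePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate",
			Annotations: map[string]string{"build": "one"}, // illustrative annotation
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // assumed
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}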
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:57:47.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-mldzz
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-mldzz
STEP: Deleting pre-stop pod
Feb 17 12:58:20.043: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:58:20.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-mldzz" for this suite.
Feb 17 12:59:04.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:59:04.314: INFO: namespace: e2e-tests-prestop-mldzz, resource: bindings, ignored listing per whitelist
Feb 17 12:59:04.360: INFO: namespace e2e-tests-prestop-mldzz deletion completed in 44.248411251s

• [SLOW TEST:76.413 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
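Annotation for the PreStop test above: a server pod counts incoming requests, and a tester pod's preStop hook contacts it when the tester is deleted; the "Saw:" JSON shows the server received exactly one "prestop" call. As an illustration only, a container with an HTTP preStop hook can be declared as below, using the v1.13-era k8s.io/api types in use here (where the hook type is still named Handler); the names, IP, and endpoint are hypothetical:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	serverIP := "10.32.0.5" // hypothetical pod IP of the server that counts hook calls

	// When this pod is deleted, the kubelet invokes the preStop hook before
	// sending SIGTERM to the container, giving the server a chance to record it.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-tester-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					// Handler was renamed LifecycleHandler in later k8s.io/api releases.
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/write", // hypothetical endpoint on the server pod
							Host: serverIP,
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	fmt.Printf("built %q with a preStop hook\n", pod.Name)
}
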
S
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:59:04.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 17 12:59:04.600: INFO: Waiting up to 5m0s for pod "downward-api-469ff4dc-5185-11ea-a180-0242ac110008" in namespace "e2e-tests-downward-api-drqw5" to be "success or failure"
Feb 17 12:59:04.608: INFO: Pod "downward-api-469ff4dc-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.258506ms
Feb 17 12:59:06.633: INFO: Pod "downward-api-469ff4dc-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032868093s
Feb 17 12:59:08.659: INFO: Pod "downward-api-469ff4dc-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05921533s
Feb 17 12:59:11.628: INFO: Pod "downward-api-469ff4dc-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.027707941s
Feb 17 12:59:13.647: INFO: Pod "downward-api-469ff4dc-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 9.046899236s
Feb 17 12:59:15.695: INFO: Pod "downward-api-469ff4dc-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.094739354s
Feb 17 12:59:17.754: INFO: Pod "downward-api-469ff4dc-5185-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.153889875s
STEP: Saw pod success
Feb 17 12:59:17.754: INFO: Pod "downward-api-469ff4dc-5185-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 12:59:17.764: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-469ff4dc-5185-11ea-a180-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 17 12:59:17.913: INFO: Waiting for pod downward-api-469ff4dc-5185-11ea-a180-0242ac110008 to disappear
Feb 17 12:59:17.947: INFO: Pod downward-api-469ff4dc-5185-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 12:59:17.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-drqw5" for this suite.
Feb 17 12:59:24.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 12:59:24.111: INFO: namespace: e2e-tests-downward-api-drqw5, resource: bindings, ignored listing per whitelist
Feb 17 12:59:24.201: INFO: namespace e2e-tests-downward-api-drqw5 deletion completed in 6.237737107s

• [SLOW TEST:19.841 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
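Annotation for the Downward API test above: the pod succeeds once the dapi-container has printed its environment, which must include the container's own CPU/memory limits and requests injected through resourceFieldRef. A minimal sketch of that wiring (names, image, and quantities are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"}, // the test asserts on this output
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
					}},
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
					}},
					{Name: "CPU_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.cpu"},
					}},
					{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
					}},
				},
			}},
		},
	}
	fmt.Printf("built %q with %d downward API env var(s)\n", pod.Name, len(pod.Spec.Containers[0].Env))
}
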
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 12:59:24.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jgq2d
Feb 17 12:59:34.531: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jgq2d
STEP: checking the pod's current state and verifying that restartCount is present
Feb 17 12:59:34.542: INFO: Initial restart count of pod liveness-http is 0
Feb 17 12:59:52.914: INFO: Restart count of pod e2e-tests-container-probe-jgq2d/liveness-http is now 1 (18.371501792s elapsed)
Feb 17 13:00:11.586: INFO: Restart count of pod e2e-tests-container-probe-jgq2d/liveness-http is now 2 (37.044131682s elapsed)
Feb 17 13:00:32.206: INFO: Restart count of pod e2e-tests-container-probe-jgq2d/liveness-http is now 3 (57.66363288s elapsed)
Feb 17 13:00:48.416: INFO: Restart count of pod e2e-tests-container-probe-jgq2d/liveness-http is now 4 (1m13.874007002s elapsed)
Feb 17 13:01:56.869: INFO: Restart count of pod e2e-tests-container-probe-jgq2d/liveness-http is now 5 (2m22.327090124s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:01:58.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jgq2d" for this suite.
Feb 17 13:02:07.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:02:07.238: INFO: namespace: e2e-tests-container-probe-jgq2d, resource: bindings, ignored listing per whitelist
Feb 17 13:02:07.286: INFO: namespace e2e-tests-container-probe-jgq2d deletion completed in 8.202241346s

• [SLOW TEST:163.084 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
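Annotation for the probing test above: each "Restart count ... is now N" line is the kubelet killing and recreating the container after its HTTP liveness probe fails, and the test checks that the counter keeps climbing and never decreases. A sketch of a pod with such a probe, using the v1.13-era k8s.io/api types (where Probe embeds Handler; newer releases renamed it ProbeHandler); the image and port are placeholders:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "example/healthz-server", // hypothetical image whose /healthz eventually fails
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Printf("built %q; restartCount grows each time the probe fails\n", pod.Name)
}
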
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:02:07.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 17 13:02:27.633: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 17 13:02:27.649: INFO: Pod pod-with-poststart-http-hook still exists
Feb 17 13:02:29.649: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 17 13:02:29.668: INFO: Pod pod-with-poststart-http-hook still exists
Feb 17 13:02:31.649: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 17 13:02:31.670: INFO: Pod pod-with-poststart-http-hook still exists
Feb 17 13:02:33.650: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 17 13:02:33.670: INFO: Pod pod-with-poststart-http-hook still exists
Feb 17 13:02:35.650: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 17 13:02:35.664: INFO: Pod pod-with-poststart-http-hook still exists
Feb 17 13:02:37.649: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 17 13:02:37.673: INFO: Pod pod-with-poststart-http-hook still exists
Feb 17 13:02:39.649: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 17 13:02:39.665: INFO: Pod pod-with-poststart-http-hook still exists
Feb 17 13:02:41.649: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 17 13:02:41.662: INFO: Pod pod-with-poststart-http-hook still exists
Feb 17 13:02:43.649: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 17 13:02:43.663: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:02:43.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-fzx85" for this suite.
Feb 17 13:03:07.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:03:07.946: INFO: namespace: e2e-tests-container-lifecycle-hook-fzx85, resource: bindings, ignored listing per whitelist
Feb 17 13:03:08.000: INFO: namespace e2e-tests-container-lifecycle-hook-fzx85 deletion completed in 24.325106488s

• [SLOW TEST:60.713 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
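Annotation: this case mirrors the preStop one earlier. The suite first starts a helper container to receive the HTTPGet hook request, then creates a pod whose container declares a postStart httpGet hook and verifies the hook was delivered before deleting everything. Only the Lifecycle field being set differs from the earlier sketch (same caveats: hypothetical names, v1.13-era Handler type):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	handlerIP := "10.32.0.6" // hypothetical pod IP of the helper that receives the hook

	// postStart fires right after the container is created; if the handler
	// fails, the container is killed and handled per its restart policy.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook-example"}, // hypothetical
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "poststart",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{ // LifecycleHandler in newer k8s.io/api
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=poststart", // hypothetical endpoint
							Host: handlerIP,
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	fmt.Printf("built %q with a postStart hook\n", pod.Name)
}
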
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:03:08.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-d7e1ea71-5185-11ea-a180-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 17 13:03:08.292: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d7e3ba9c-5185-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-ntb25" to be "success or failure"
Feb 17 13:03:08.544: INFO: Pod "pod-projected-configmaps-d7e3ba9c-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 251.49199ms
Feb 17 13:03:11.159: INFO: Pod "pod-projected-configmaps-d7e3ba9c-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.867112723s
Feb 17 13:03:13.193: INFO: Pod "pod-projected-configmaps-d7e3ba9c-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.900948711s
Feb 17 13:03:15.202: INFO: Pod "pod-projected-configmaps-d7e3ba9c-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.909613109s
Feb 17 13:03:18.306: INFO: Pod "pod-projected-configmaps-d7e3ba9c-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014000936s
Feb 17 13:03:20.322: INFO: Pod "pod-projected-configmaps-d7e3ba9c-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029399999s
Feb 17 13:03:22.542: INFO: Pod "pod-projected-configmaps-d7e3ba9c-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.249890255s
Feb 17 13:03:26.093: INFO: Pod "pod-projected-configmaps-d7e3ba9c-5185-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.800869804s
STEP: Saw pod success
Feb 17 13:03:26.093: INFO: Pod "pod-projected-configmaps-d7e3ba9c-5185-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 13:03:26.103: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-d7e3ba9c-5185-11ea-a180-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 17 13:03:26.652: INFO: Waiting for pod pod-projected-configmaps-d7e3ba9c-5185-11ea-a180-0242ac110008 to disappear
Feb 17 13:03:26.681: INFO: Pod pod-projected-configmaps-d7e3ba9c-5185-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:03:26.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ntb25" for this suite.
Feb 17 13:03:32.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:03:33.109: INFO: namespace: e2e-tests-projected-ntb25, resource: bindings, ignored listing per whitelist
Feb 17 13:03:33.141: INFO: namespace e2e-tests-projected-ntb25 deletion completed in 6.44032192s

• [SLOW TEST:25.141 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
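Annotation for the projected configMap test above: the pod mounts a configMap through the projected volume plugin with DefaultMode set, and the container reads the file back so both its content and its permission bits can be checked. A sketch of that volume wiring (object names, paths, and the 0400 mode are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // applied to every file in the projected volume

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"}, // hypothetical
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"}, // hypothetical
								Items:                []corev1.KeyToPath{{Key: "data-1", Path: "data-1"}},
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Printf("built %q; projected files get mode %#o\n", pod.Name, mode)
}

A per-item mode (KeyToPath.Mode) can override DefaultMode for individual files if a single key needs different permissions.
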
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:03:33.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 17 13:03:33.547: INFO: Waiting up to 5m0s for pod "pod-e6e1e858-5185-11ea-a180-0242ac110008" in namespace "e2e-tests-emptydir-wmnps" to be "success or failure"
Feb 17 13:03:33.576: INFO: Pod "pod-e6e1e858-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 28.798304ms
Feb 17 13:03:36.699: INFO: Pod "pod-e6e1e858-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.151697531s
Feb 17 13:03:38.718: INFO: Pod "pod-e6e1e858-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.170764001s
Feb 17 13:03:40.763: INFO: Pod "pod-e6e1e858-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.215269081s
Feb 17 13:03:43.562: INFO: Pod "pod-e6e1e858-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014139555s
Feb 17 13:03:45.683: INFO: Pod "pod-e6e1e858-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.135606715s
Feb 17 13:03:47.764: INFO: Pod "pod-e6e1e858-5185-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.216558902s
Feb 17 13:03:49.780: INFO: Pod "pod-e6e1e858-5185-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.232033753s
STEP: Saw pod success
Feb 17 13:03:49.780: INFO: Pod "pod-e6e1e858-5185-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 13:03:49.785: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e6e1e858-5185-11ea-a180-0242ac110008 container test-container: 
STEP: delete the pod
Feb 17 13:03:50.245: INFO: Waiting for pod pod-e6e1e858-5185-11ea-a180-0242ac110008 to disappear
Feb 17 13:03:50.251: INFO: Pod pod-e6e1e858-5185-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:03:50.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wmnps" for this suite.
Feb 17 13:03:56.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:03:56.748: INFO: namespace: e2e-tests-emptydir-wmnps, resource: bindings, ignored listing per whitelist
Feb 17 13:03:56.812: INFO: namespace e2e-tests-emptydir-wmnps deletion completed in 6.49554175s

• [SLOW TEST:23.671 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
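Annotation for the emptyDir test above: the pod writes a file into an emptyDir volume on the node's default storage medium and verifies it comes back with 0644 permissions and the expected content while running as root. The volume declaration itself is small; in this sketch the image and command are placeholders:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a 0644 file and show its mode so it can be asserted on.
				Command:      []string{"sh", "-c", "echo content > /ephemeral/file && chmod 0644 /ephemeral/file && ls -l /ephemeral/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/ephemeral"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumDefault means node-local disk rather than tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
	fmt.Printf("built %q with an emptyDir volume\n", pod.Name)
}

Switching Medium to corev1.StorageMediumMemory would back the same mount with tmpfs instead of node-local disk.
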
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:03:56.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-swns
STEP: Creating a pod to test atomic-volume-subpath
Feb 17 13:03:57.114: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-swns" in namespace "e2e-tests-subpath-wn7hj" to be "success or failure"
Feb 17 13:03:57.124: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Pending", Reason="", readiness=false. Elapsed: 9.436667ms
Feb 17 13:03:59.258: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143857285s
Feb 17 13:04:01.293: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178911451s
Feb 17 13:04:04.205: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Pending", Reason="", readiness=false. Elapsed: 7.090676607s
Feb 17 13:04:06.226: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Pending", Reason="", readiness=false. Elapsed: 9.111056155s
Feb 17 13:04:08.245: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Pending", Reason="", readiness=false. Elapsed: 11.129981315s
Feb 17 13:04:10.262: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Pending", Reason="", readiness=false. Elapsed: 13.147668031s
Feb 17 13:04:12.274: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Pending", Reason="", readiness=false. Elapsed: 15.159569003s
Feb 17 13:04:14.539: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Pending", Reason="", readiness=false. Elapsed: 17.424805469s
Feb 17 13:04:16.572: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Pending", Reason="", readiness=false. Elapsed: 19.457636054s
Feb 17 13:04:18.595: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Running", Reason="", readiness=false. Elapsed: 21.480493815s
Feb 17 13:04:20.620: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Running", Reason="", readiness=false. Elapsed: 23.505248548s
Feb 17 13:04:22.658: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Running", Reason="", readiness=false. Elapsed: 25.543629595s
Feb 17 13:04:24.674: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Running", Reason="", readiness=false. Elapsed: 27.559302831s
Feb 17 13:04:26.715: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Running", Reason="", readiness=false. Elapsed: 29.600116296s
Feb 17 13:04:28.750: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Running", Reason="", readiness=false. Elapsed: 31.635767952s
Feb 17 13:04:30.764: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Running", Reason="", readiness=false. Elapsed: 33.649910974s
Feb 17 13:04:32.781: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Running", Reason="", readiness=false. Elapsed: 35.666764186s
Feb 17 13:04:34.801: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Running", Reason="", readiness=false. Elapsed: 37.686783132s
Feb 17 13:04:36.815: INFO: Pod "pod-subpath-test-configmap-swns": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.700458281s
STEP: Saw pod success
Feb 17 13:04:36.815: INFO: Pod "pod-subpath-test-configmap-swns" satisfied condition "success or failure"
Feb 17 13:04:36.823: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-swns container test-container-subpath-configmap-swns: 
STEP: delete the pod
Feb 17 13:04:37.536: INFO: Waiting for pod pod-subpath-test-configmap-swns to disappear
Feb 17 13:04:37.822: INFO: Pod pod-subpath-test-configmap-swns no longer exists
STEP: Deleting pod pod-subpath-test-configmap-swns
Feb 17 13:04:37.823: INFO: Deleting pod "pod-subpath-test-configmap-swns" in namespace "e2e-tests-subpath-wn7hj"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:04:37.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-wn7hj" for this suite.
Feb 17 13:04:45.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:04:45.983: INFO: namespace: e2e-tests-subpath-wn7hj, resource: bindings, ignored listing per whitelist
Feb 17 13:04:46.195: INFO: namespace e2e-tests-subpath-wn7hj deletion completed in 8.348051367s

• [SLOW TEST:49.381 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
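Annotation for the subpath test above: it mounts a single configMap key over a file path that already exists in the container image; combining MountPath with SubPath replaces just that file instead of shadowing the whole directory. A sketch of the mount (the configMap name, key, and target path are illustrative, not the suite's exact manifest):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap-example"}, // hypothetical
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hostname"}, // /etc/hostname already exists in the image
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "subpath-config",
					MountPath: "/etc/hostname", // existing file being replaced
					SubPath:   "hostname",      // single file projected from the volume
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "subpath-config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-configmap"}, // hypothetical
						Items:                []corev1.KeyToPath{{Key: "hostname", Path: "hostname"}},
					},
				},
			}},
		},
	}
	fmt.Printf("built %q with a subPath mount\n", pod.Name)
}
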
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:04:46.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-v5kz
STEP: Creating a pod to test atomic-volume-subpath
Feb 17 13:04:46.933: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-v5kz" in namespace "e2e-tests-subpath-plzbz" to be "success or failure"
Feb 17 13:04:46.984: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Pending", Reason="", readiness=false. Elapsed: 50.979258ms
Feb 17 13:04:49.783: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.849793162s
Feb 17 13:04:51.809: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.875560035s
Feb 17 13:04:53.839: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.90543967s
Feb 17 13:04:55.854: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.920532308s
Feb 17 13:04:57.868: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.934203095s
Feb 17 13:04:59.907: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Pending", Reason="", readiness=false. Elapsed: 12.973740619s
Feb 17 13:05:01.917: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Pending", Reason="", readiness=false. Elapsed: 14.98388743s
Feb 17 13:05:03.950: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Pending", Reason="", readiness=false. Elapsed: 17.016339958s
Feb 17 13:05:05.964: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Running", Reason="", readiness=false. Elapsed: 19.030774717s
Feb 17 13:05:07.986: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Running", Reason="", readiness=false. Elapsed: 21.052976983s
Feb 17 13:05:10.024: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Running", Reason="", readiness=false. Elapsed: 23.090417126s
Feb 17 13:05:12.037: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Running", Reason="", readiness=false. Elapsed: 25.103285511s
Feb 17 13:05:14.050: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Running", Reason="", readiness=false. Elapsed: 27.116942805s
Feb 17 13:05:16.062: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Running", Reason="", readiness=false. Elapsed: 29.128402536s
Feb 17 13:05:18.075: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Running", Reason="", readiness=false. Elapsed: 31.141423678s
Feb 17 13:05:20.091: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Running", Reason="", readiness=false. Elapsed: 33.157988897s
Feb 17 13:05:22.113: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Running", Reason="", readiness=false. Elapsed: 35.179747921s
Feb 17 13:05:24.175: INFO: Pod "pod-subpath-test-projected-v5kz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.241257128s
STEP: Saw pod success
Feb 17 13:05:24.175: INFO: Pod "pod-subpath-test-projected-v5kz" satisfied condition "success or failure"
Feb 17 13:05:24.184: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-v5kz container test-container-subpath-projected-v5kz: 
STEP: delete the pod
Feb 17 13:05:24.621: INFO: Waiting for pod pod-subpath-test-projected-v5kz to disappear
Feb 17 13:05:24.630: INFO: Pod pod-subpath-test-projected-v5kz no longer exists
STEP: Deleting pod pod-subpath-test-projected-v5kz
Feb 17 13:05:24.630: INFO: Deleting pod "pod-subpath-test-projected-v5kz" in namespace "e2e-tests-subpath-plzbz"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:05:24.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-plzbz" for this suite.
Feb 17 13:05:30.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:05:30.781: INFO: namespace: e2e-tests-subpath-plzbz, resource: bindings, ignored listing per whitelist
Feb 17 13:05:30.826: INFO: namespace e2e-tests-subpath-plzbz deletion completed in 6.177293942s

• [SLOW TEST:44.630 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:05:30.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-2d004ee8-5186-11ea-a180-0242ac110008
STEP: Creating secret with name secret-projected-all-test-volume-2d004ec7-5186-11ea-a180-0242ac110008
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 17 13:05:31.235: INFO: Waiting up to 5m0s for pod "projected-volume-2d004dda-5186-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-8qhtv" to be "success or failure"
Feb 17 13:05:31.262: INFO: Pod "projected-volume-2d004dda-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 27.021734ms
Feb 17 13:05:33.415: INFO: Pod "projected-volume-2d004dda-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179433055s
Feb 17 13:05:35.448: INFO: Pod "projected-volume-2d004dda-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213294457s
Feb 17 13:05:37.577: INFO: Pod "projected-volume-2d004dda-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.34154035s
Feb 17 13:05:39.586: INFO: Pod "projected-volume-2d004dda-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.350325325s
Feb 17 13:05:41.597: INFO: Pod "projected-volume-2d004dda-5186-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.361478403s
STEP: Saw pod success
Feb 17 13:05:41.597: INFO: Pod "projected-volume-2d004dda-5186-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 13:05:41.602: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-2d004dda-5186-11ea-a180-0242ac110008 container projected-all-volume-test: 
STEP: delete the pod
Feb 17 13:05:41.681: INFO: Waiting for pod projected-volume-2d004dda-5186-11ea-a180-0242ac110008 to disappear
Feb 17 13:05:42.673: INFO: Pod projected-volume-2d004dda-5186-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:05:42.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8qhtv" for this suite.
Feb 17 13:05:50.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:05:51.254: INFO: namespace: e2e-tests-projected-8qhtv, resource: bindings, ignored listing per whitelist
Feb 17 13:05:51.294: INFO: namespace e2e-tests-projected-8qhtv deletion completed in 8.600565714s

• [SLOW TEST:20.467 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
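Annotation for the projected-combined test above: it drives the projection API end to end by placing a configMap and a secret (plus, in this sketch, a downward API item) in a single projected volume and having the container check each resulting file. All object names below are placeholders:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-volume-example"}, // hypothetical
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "projected-all-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /all/cm-data /all/secret-data /all/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/all"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							{ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all"}, // hypothetical
								Items:                []corev1.KeyToPath{{Key: "configmap-data", Path: "cm-data"}},
							}},
							{Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all"}, // hypothetical
								Items:                []corev1.KeyToPath{{Key: "secret-data", Path: "secret-data"}},
							}},
							{DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							}},
						},
					},
				},
			}},
		},
	}
	fmt.Printf("built %q with %d projection source(s)\n", pod.Name, len(pod.Spec.Volumes[0].Projected.Sources))
}
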
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:05:51.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-393d2a99-5186-11ea-a180-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 17 13:05:51.637: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-393e9c42-5186-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-dsdkg" to be "success or failure"
Feb 17 13:05:51.854: INFO: Pod "pod-projected-secrets-393e9c42-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 216.405177ms
Feb 17 13:05:55.308: INFO: Pod "pod-projected-secrets-393e9c42-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 3.670718622s
Feb 17 13:05:57.330: INFO: Pod "pod-projected-secrets-393e9c42-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.692627606s
Feb 17 13:05:59.347: INFO: Pod "pod-projected-secrets-393e9c42-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.710145905s
Feb 17 13:06:01.800: INFO: Pod "pod-projected-secrets-393e9c42-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.162340907s
Feb 17 13:06:03.919: INFO: Pod "pod-projected-secrets-393e9c42-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.281349269s
Feb 17 13:06:05.939: INFO: Pod "pod-projected-secrets-393e9c42-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.301218503s
Feb 17 13:06:07.954: INFO: Pod "pod-projected-secrets-393e9c42-5186-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.316291257s
STEP: Saw pod success
Feb 17 13:06:07.954: INFO: Pod "pod-projected-secrets-393e9c42-5186-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 13:06:07.967: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-393e9c42-5186-11ea-a180-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 17 13:06:08.661: INFO: Waiting for pod pod-projected-secrets-393e9c42-5186-11ea-a180-0242ac110008 to disappear
Feb 17 13:06:08.667: INFO: Pod pod-projected-secrets-393e9c42-5186-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:06:08.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dsdkg" for this suite.
Feb 17 13:06:14.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:06:14.982: INFO: namespace: e2e-tests-projected-dsdkg, resource: bindings, ignored listing per whitelist
Feb 17 13:06:15.009: INFO: namespace e2e-tests-projected-dsdkg deletion completed in 6.337125267s

• [SLOW TEST:23.715 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:06:15.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 17 13:06:15.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:06:29.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-htwx5" for this suite.
Feb 17 13:07:15.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:07:15.498: INFO: namespace: e2e-tests-pods-htwx5, resource: bindings, ignored listing per whitelist
Feb 17 13:07:15.503: INFO: namespace e2e-tests-pods-htwx5 deletion completed in 46.201418426s

• [SLOW TEST:60.494 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:07:15.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-6b73c305-5186-11ea-a180-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 17 13:07:16.055: INFO: Waiting up to 5m0s for pod "pod-secrets-6b753461-5186-11ea-a180-0242ac110008" in namespace "e2e-tests-secrets-zd8dp" to be "success or failure"
Feb 17 13:07:16.120: INFO: Pod "pod-secrets-6b753461-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 65.094638ms
Feb 17 13:07:19.031: INFO: Pod "pod-secrets-6b753461-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.975553604s
Feb 17 13:07:21.300: INFO: Pod "pod-secrets-6b753461-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 5.244474227s
Feb 17 13:07:23.333: INFO: Pod "pod-secrets-6b753461-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 7.277728447s
Feb 17 13:07:26.739: INFO: Pod "pod-secrets-6b753461-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.684046616s
Feb 17 13:07:28.748: INFO: Pod "pod-secrets-6b753461-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.693204358s
Feb 17 13:07:30.760: INFO: Pod "pod-secrets-6b753461-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.705336958s
Feb 17 13:07:33.348: INFO: Pod "pod-secrets-6b753461-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 17.292699596s
Feb 17 13:07:35.383: INFO: Pod "pod-secrets-6b753461-5186-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.32815101s
STEP: Saw pod success
Feb 17 13:07:35.384: INFO: Pod "pod-secrets-6b753461-5186-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 13:07:35.397: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6b753461-5186-11ea-a180-0242ac110008 container secret-volume-test: 
STEP: delete the pod
Feb 17 13:07:35.773: INFO: Waiting for pod pod-secrets-6b753461-5186-11ea-a180-0242ac110008 to disappear
Feb 17 13:07:35.875: INFO: Pod pod-secrets-6b753461-5186-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:07:35.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zd8dp" for this suite.
Feb 17 13:07:42.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:07:42.138: INFO: namespace: e2e-tests-secrets-zd8dp, resource: bindings, ignored listing per whitelist
Feb 17 13:07:42.186: INFO: namespace e2e-tests-secrets-zd8dp deletion completed in 6.279443379s

• [SLOW TEST:26.683 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
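Annotation for the secret-volume test above: same shape as the projected configMap case, but with the plain secret volume type. The secret's keys are mounted as files, DefaultMode controls their permission bits, and the container inspects both content and mode. Sketch (secret name, paths, and mode are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"}, // hypothetical
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-example", // hypothetical
						DefaultMode: &mode,
					},
				},
			}},
		},
	}
	fmt.Printf("built %q; secret files get mode %#o\n", pod.Name, mode)
}
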
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:07:42.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 17 13:07:42.664: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:08:18.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-cm8vl" for this suite.
Feb 17 13:08:26.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:08:26.880: INFO: namespace: e2e-tests-init-container-cm8vl, resource: bindings, ignored listing per whitelist
Feb 17 13:08:26.922: INFO: namespace e2e-tests-init-container-cm8vl deletion completed in 8.337646259s

• [SLOW TEST:44.735 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
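Annotation for the init-container test above: init containers run one at a time, in order, and each must exit successfully before the next starts; only then does the kubelet start the regular containers. With RestartPolicy Never, a failing init container fails the whole pod instead of being retried. A sketch of the shape this test creates (images and commands are placeholders):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"}, // hypothetical
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo main container started"},
			}},
		},
	}
	fmt.Printf("built %q with %d init container(s)\n", pod.Name, len(pod.Spec.InitContainers))
}
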
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:08:26.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb 17 13:08:27.270: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-m77p4" to be "success or failure"
Feb 17 13:08:27.282: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.292245ms
Feb 17 13:08:29.565: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294122118s
Feb 17 13:08:31.616: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34509782s
Feb 17 13:08:33.626: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.355081994s
Feb 17 13:08:35.696: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.42567776s
Feb 17 13:08:37.708: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.437737296s
Feb 17 13:08:39.723: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.451873924s
Feb 17 13:08:43.423: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.152784041s
Feb 17 13:08:45.436: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.165372722s
Feb 17 13:08:47.534: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.263568935s
STEP: Saw pod success
Feb 17 13:08:47.535: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 17 13:08:47.551: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 17 13:08:47.848: INFO: Waiting for pod pod-host-path-test to disappear
Feb 17 13:08:47.859: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:08:47.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-m77p4" for this suite.
Feb 17 13:08:56.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:08:56.392: INFO: namespace: e2e-tests-hostpath-m77p4, resource: bindings, ignored listing per whitelist
Feb 17 13:08:56.540: INFO: namespace e2e-tests-hostpath-m77p4 deletion completed in 8.666984194s

• [SLOW TEST:29.616 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
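Annotation for the hostPath test above: the pod mounts a directory from the node's filesystem and the test container checks that the mount shows up with the expected mode. A sketch of such a pod (the host path, image, and command are placeholders, not the suite's exact manifest):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-example"}, // hypothetical
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container-1",
				Image: "busybox",
				// Print the mode of the mounted host directory so it can be asserted on.
				Command:      []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "host-dir", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "host-dir",
				VolumeSource: corev1.VolumeSource{
					// The directory lives on whichever node schedules the pod.
					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/host-path-test"}, // hypothetical path
				},
			}},
		},
	}
	fmt.Printf("built %q with a hostPath volume\n", pod.Name)
}
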
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:08:56.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb 17 13:08:56.787: INFO: Waiting up to 5m0s for pod "client-containers-a79be95d-5186-11ea-a180-0242ac110008" in namespace "e2e-tests-containers-9kqlh" to be "success or failure"
Feb 17 13:08:56.796: INFO: Pod "client-containers-a79be95d-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.630232ms
Feb 17 13:08:58.804: INFO: Pod "client-containers-a79be95d-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017536124s
Feb 17 13:09:00.828: INFO: Pod "client-containers-a79be95d-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041568174s
Feb 17 13:09:03.515: INFO: Pod "client-containers-a79be95d-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.728216065s
Feb 17 13:09:05.525: INFO: Pod "client-containers-a79be95d-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.738368508s
Feb 17 13:09:07.553: INFO: Pod "client-containers-a79be95d-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.765762024s
Feb 17 13:09:09.576: INFO: Pod "client-containers-a79be95d-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.789382645s
Feb 17 13:09:11.593: INFO: Pod "client-containers-a79be95d-5186-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.805648923s
STEP: Saw pod success
Feb 17 13:09:11.593: INFO: Pod "client-containers-a79be95d-5186-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 13:09:11.600: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-a79be95d-5186-11ea-a180-0242ac110008 container test-container: 
STEP: delete the pod
Feb 17 13:09:13.341: INFO: Waiting for pod client-containers-a79be95d-5186-11ea-a180-0242ac110008 to disappear
Feb 17 13:09:13.353: INFO: Pod client-containers-a79be95d-5186-11ea-a180-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:09:13.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-9kqlh" for this suite.
Feb 17 13:09:19.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:09:19.730: INFO: namespace: e2e-tests-containers-9kqlh, resource: bindings, ignored listing per whitelist
Feb 17 13:09:19.842: INFO: namespace e2e-tests-containers-9kqlh deletion completed in 6.469352465s

• [SLOW TEST:23.301 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
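Annotation for the "override all" test above: the pod sets both Command (replacing the image's ENTRYPOINT) and Args (replacing its CMD), and the test then checks the container's output to confirm the image defaults were not used. The wiring is just two fields on the container; the values below are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"}, // hypothetical
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"echo"},                  // replaces the image ENTRYPOINT
				Args:    []string{"override", "arguments"}, // replaces the image CMD
			}},
		},
	}
	fmt.Printf("built %q running %v %v\n", pod.Name, pod.Spec.Containers[0].Command, pod.Spec.Containers[0].Args)
}
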
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:09:19.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-b5a3f591-5186-11ea-a180-0242ac110008
STEP: Creating a pod to test consume configMaps
Feb 17 13:09:20.344: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b5a5f71a-5186-11ea-a180-0242ac110008" in namespace "e2e-tests-projected-sl2jk" to be "success or failure"
Feb 17 13:09:20.357: INFO: Pod "pod-projected-configmaps-b5a5f71a-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.887805ms
Feb 17 13:09:22.465: INFO: Pod "pod-projected-configmaps-b5a5f71a-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12077823s
Feb 17 13:09:24.492: INFO: Pod "pod-projected-configmaps-b5a5f71a-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147973052s
Feb 17 13:09:26.524: INFO: Pod "pod-projected-configmaps-b5a5f71a-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.179419656s
Feb 17 13:09:28.547: INFO: Pod "pod-projected-configmaps-b5a5f71a-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.202359693s
Feb 17 13:09:33.182: INFO: Pod "pod-projected-configmaps-b5a5f71a-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 12.83759801s
Feb 17 13:09:35.219: INFO: Pod "pod-projected-configmaps-b5a5f71a-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 14.874980129s
Feb 17 13:09:37.239: INFO: Pod "pod-projected-configmaps-b5a5f71a-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 16.894229833s
Feb 17 13:09:39.251: INFO: Pod "pod-projected-configmaps-b5a5f71a-5186-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.907096619s
STEP: Saw pod success
Feb 17 13:09:39.252: INFO: Pod "pod-projected-configmaps-b5a5f71a-5186-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 13:09:39.265: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-b5a5f71a-5186-11ea-a180-0242ac110008 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 17 13:09:40.552: INFO: Waiting for pod pod-projected-configmaps-b5a5f71a-5186-11ea-a180-0242ac110008 to disappear
Feb 17 13:09:40.865: INFO: Pod pod-projected-configmaps-b5a5f71a-5186-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:09:40.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sl2jk" for this suite.
Feb 17 13:09:47.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:09:47.325: INFO: namespace: e2e-tests-projected-sl2jk, resource: bindings, ignored listing per whitelist
Feb 17 13:09:47.360: INFO: namespace e2e-tests-projected-sl2jk deletion completed in 6.471356476s

• [SLOW TEST:27.517 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
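
For readers reproducing the projected-configMap check by hand, a minimal manifest of the kind this spec exercises looks roughly like the sketch below; the names (my-config, the busybox image, the mount path) are placeholders and are not taken from the run above.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config                     # placeholder ConfigMap name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-example   # placeholder pod name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: my-config
EOF

The pod should run to completion and its log should contain value-1, mirroring the "success or failure" condition the framework waits for above.
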
S
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:09:47.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 17 13:09:58.795: INFO: 0 pods remaining
Feb 17 13:09:58.796: INFO: 0 pods has nil DeletionTimestamp
Feb 17 13:09:58.796: INFO: 
Feb 17 13:09:59.002: INFO: 0 pods remaining
Feb 17 13:09:59.002: INFO: 0 pods has nil DeletionTimestamp
Feb 17 13:09:59.002: INFO: 
STEP: Gathering metrics
W0217 13:09:59.941056       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 17 13:09:59.941: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:09:59.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-bv2pg" for this suite.
Feb 17 13:10:14.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:10:15.760: INFO: namespace: e2e-tests-gc-bv2pg, resource: bindings, ignored listing per whitelist
Feb 17 13:10:16.019: INFO: namespace e2e-tests-gc-bv2pg deletion completed in 16.06945856s

• [SLOW TEST:28.659 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
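
The deleteOptions behaviour this spec verifies can be reproduced against any cluster by sending the DELETE with an explicit propagationPolicy; in the sketch below the replication controller name my-rc, the default namespace and the proxy port are placeholders.

kubectl proxy --port=8080 &
curl -X DELETE http://localhost:8080/api/v1/namespaces/default/replicationcontrollers/my-rc \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'

With Foreground propagation the rc lingers (carrying a deletionTimestamp and the foregroundDeletion finalizer) until all of its pods are gone, which is the "0 pods remaining" state polled for above.
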
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:10:16.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb 17 13:10:16.886: INFO: Waiting up to 5m0s for pod "var-expansion-d74f0653-5186-11ea-a180-0242ac110008" in namespace "e2e-tests-var-expansion-dzrr4" to be "success or failure"
Feb 17 13:10:16.960: INFO: Pod "var-expansion-d74f0653-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 73.903848ms
Feb 17 13:10:18.976: INFO: Pod "var-expansion-d74f0653-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090534161s
Feb 17 13:10:21.671: INFO: Pod "var-expansion-d74f0653-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.784982317s
Feb 17 13:10:23.691: INFO: Pod "var-expansion-d74f0653-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.804923908s
Feb 17 13:10:25.702: INFO: Pod "var-expansion-d74f0653-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.816264243s
Feb 17 13:10:27.713: INFO: Pod "var-expansion-d74f0653-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 10.826592471s
Feb 17 13:10:29.734: INFO: Pod "var-expansion-d74f0653-5186-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.848371725s
STEP: Saw pod success
Feb 17 13:10:29.734: INFO: Pod "var-expansion-d74f0653-5186-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 13:10:29.742: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-d74f0653-5186-11ea-a180-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 17 13:10:30.426: INFO: Waiting for pod var-expansion-d74f0653-5186-11ea-a180-0242ac110008 to disappear
Feb 17 13:10:30.530: INFO: Pod var-expansion-d74f0653-5186-11ea-a180-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:10:30.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-dzrr4" for this suite.
Feb 17 13:10:38.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:10:38.712: INFO: namespace: e2e-tests-var-expansion-dzrr4, resource: bindings, ignored listing per whitelist
Feb 17 13:10:38.895: INFO: namespace e2e-tests-var-expansion-dzrr4 deletion completed in 8.31473618s

• [SLOW TEST:22.875 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:10:38.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb 17 13:10:39.163: INFO: Waiting up to 5m0s for pod "var-expansion-e4a22153-5186-11ea-a180-0242ac110008" in namespace "e2e-tests-var-expansion-2p4fv" to be "success or failure"
Feb 17 13:10:39.299: INFO: Pod "var-expansion-e4a22153-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 135.858577ms
Feb 17 13:10:41.349: INFO: Pod "var-expansion-e4a22153-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185813526s
Feb 17 13:10:43.379: INFO: Pod "var-expansion-e4a22153-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.215567671s
Feb 17 13:10:46.080: INFO: Pod "var-expansion-e4a22153-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.916658568s
Feb 17 13:10:48.092: INFO: Pod "var-expansion-e4a22153-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.928978262s
Feb 17 13:10:50.110: INFO: Pod "var-expansion-e4a22153-5186-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.946484274s
STEP: Saw pod success
Feb 17 13:10:50.110: INFO: Pod "var-expansion-e4a22153-5186-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 13:10:50.116: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-e4a22153-5186-11ea-a180-0242ac110008 container dapi-container: 
STEP: delete the pod
Feb 17 13:10:50.707: INFO: Waiting for pod var-expansion-e4a22153-5186-11ea-a180-0242ac110008 to disappear
Feb 17 13:10:50.719: INFO: Pod var-expansion-e4a22153-5186-11ea-a180-0242ac110008 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:10:50.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-2p4fv" for this suite.
Feb 17 13:10:56.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:10:56.952: INFO: namespace: e2e-tests-var-expansion-2p4fv, resource: bindings, ignored listing per whitelist
Feb 17 13:10:57.028: INFO: namespace e2e-tests-var-expansion-2p4fv deletion completed in 6.303833097s

• [SLOW TEST:18.131 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
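
Both Variable Expansion specs above come down to the $(VAR_NAME) substitution the API performs on a container's command and args; a minimal stand-alone pod that shows the same expansion (all names and values are placeholders) could look like this:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example        # placeholder pod name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from the environment"
    command: ["/bin/sh", "-c"]
    args: ["echo $(MESSAGE)"]        # $(MESSAGE) is expanded by Kubernetes from env, not by the shell
EOF

kubectl logs var-expansion-example should then print the expanded value once the pod has succeeded.
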
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:10:57.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 17 13:10:57.225: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:11:15.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-pzghb" for this suite.
Feb 17 13:11:21.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:11:21.856: INFO: namespace: e2e-tests-init-container-pzghb, resource: bindings, ignored listing per whitelist
Feb 17 13:11:22.155: INFO: namespace e2e-tests-init-container-pzghb deletion completed in 6.615949485s

• [SLOW TEST:25.127 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
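
A hand-rolled reproduction of the RestartNever init-container case is straightforward; the names and the /bin/false failure below are placeholders chosen only to force the init phase to fail.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-example            # placeholder pod name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox
    command: ["/bin/false"]          # init container exits non-zero
  containers:
  - name: app
    image: busybox
    command: ["echo", "this should never run"]
EOF

kubectl get pod init-fail-example should report the pod as Failed (Init:Error) with the app container never started, matching what the spec asserts.
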
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:11:22.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-fe69938e-5186-11ea-a180-0242ac110008
STEP: Creating a pod to test consume secrets
Feb 17 13:11:22.491: INFO: Waiting up to 5m0s for pod "pod-secrets-fe739c13-5186-11ea-a180-0242ac110008" in namespace "e2e-tests-secrets-v2f9g" to be "success or failure"
Feb 17 13:11:22.506: INFO: Pod "pod-secrets-fe739c13-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 15.371344ms
Feb 17 13:11:24.535: INFO: Pod "pod-secrets-fe739c13-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04356621s
Feb 17 13:11:26.587: INFO: Pod "pod-secrets-fe739c13-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096174872s
Feb 17 13:11:28.658: INFO: Pod "pod-secrets-fe739c13-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16658364s
Feb 17 13:11:30.710: INFO: Pod "pod-secrets-fe739c13-5186-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.218754768s
Feb 17 13:11:32.723: INFO: Pod "pod-secrets-fe739c13-5186-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.231833467s
STEP: Saw pod success
Feb 17 13:11:32.723: INFO: Pod "pod-secrets-fe739c13-5186-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 13:11:32.728: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-fe739c13-5186-11ea-a180-0242ac110008 container secret-env-test: 
STEP: delete the pod
Feb 17 13:11:33.905: INFO: Waiting for pod pod-secrets-fe739c13-5186-11ea-a180-0242ac110008 to disappear
Feb 17 13:11:33.932: INFO: Pod pod-secrets-fe739c13-5186-11ea-a180-0242ac110008 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:11:33.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-v2f9g" for this suite.
Feb 17 13:11:40.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:11:40.125: INFO: namespace: e2e-tests-secrets-v2f9g, resource: bindings, ignored listing per whitelist
Feb 17 13:11:40.200: INFO: namespace e2e-tests-secrets-v2f9g deletion completed in 6.247396126s

• [SLOW TEST:18.044 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
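
The secret-to-env-var path being tested maps onto a small manifest like the following; the secret and pod names, the key and its value are all placeholders.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-env-example           # placeholder secret name
type: Opaque
stringData:
  SECRET_DATA: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-example       # placeholder pod name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["/bin/sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-example
          key: SECRET_DATA
EOF
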
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:11:40.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Feb 17 13:11:40.433: INFO: Waiting up to 5m0s for pod "client-containers-0924ab1f-5187-11ea-a180-0242ac110008" in namespace "e2e-tests-containers-skksk" to be "success or failure"
Feb 17 13:11:40.445: INFO: Pod "client-containers-0924ab1f-5187-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 11.77912ms
Feb 17 13:11:42.643: INFO: Pod "client-containers-0924ab1f-5187-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209696999s
Feb 17 13:11:44.653: INFO: Pod "client-containers-0924ab1f-5187-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 4.219986572s
Feb 17 13:11:46.686: INFO: Pod "client-containers-0924ab1f-5187-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253111727s
Feb 17 13:11:48.804: INFO: Pod "client-containers-0924ab1f-5187-11ea-a180-0242ac110008": Phase="Pending", Reason="", readiness=false. Elapsed: 8.371355082s
Feb 17 13:11:50.841: INFO: Pod "client-containers-0924ab1f-5187-11ea-a180-0242ac110008": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.408290779s
STEP: Saw pod success
Feb 17 13:11:50.842: INFO: Pod "client-containers-0924ab1f-5187-11ea-a180-0242ac110008" satisfied condition "success or failure"
Feb 17 13:11:50.850: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-0924ab1f-5187-11ea-a180-0242ac110008 container test-container: 
STEP: delete the pod
Feb 17 13:11:51.117: INFO: Waiting for pod client-containers-0924ab1f-5187-11ea-a180-0242ac110008 to disappear
Feb 17 13:11:51.125: INFO: Pod client-containers-0924ab1f-5187-11ea-a180-0242ac110008 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:11:51.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-skksk" for this suite.
Feb 17 13:11:57.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:11:57.391: INFO: namespace: e2e-tests-containers-skksk, resource: bindings, ignored listing per whitelist
Feb 17 13:11:57.423: INFO: namespace e2e-tests-containers-skksk deletion completed in 6.288420495s

• [SLOW TEST:17.223 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
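
The entrypoint-override behaviour follows from the Pod API contract that command replaces the image's ENTRYPOINT (and args replaces its CMD); a minimal illustration with a placeholder pod name, reusing the nginx image seen elsewhere in this run:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-entrypoint-example  # placeholder pod name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine
    command: ["/bin/sh", "-c", "echo entrypoint overridden"]   # replaces the image ENTRYPOINT
EOF

Instead of starting nginx, the container just prints the message and exits, which is the observable effect the spec checks for.
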
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:11:57.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 17 13:11:57.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-pg8v4'
Feb 17 13:11:59.915: INFO: stderr: ""
Feb 17 13:11:59.916: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 17 13:12:09.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-pg8v4 -o json'
Feb 17 13:12:10.120: INFO: stderr: ""
Feb 17 13:12:10.121: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-17T13:11:59Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-pg8v4\",\n        \"resourceVersion\": \"21984326\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-pg8v4/pods/e2e-test-nginx-pod\",\n        \"uid\": \"14bf3205-5187-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-gkdhr\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-gkdhr\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-gkdhr\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-17T13:12:00Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-17T13:12:09Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-17T13:12:09Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-17T13:11:59Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"docker://0455fad46b90170e119732924eca6b542b64e7b55bbbe036c348076e1c7cea30\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-17T13:12:07Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-17T13:12:00Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 17 13:12:10.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-pg8v4'
Feb 17 13:12:10.416: INFO: stderr: ""
Feb 17 13:12:10.416: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Feb 17 13:12:10.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-pg8v4'
Feb 17 13:12:19.134: INFO: stderr: ""
Feb 17 13:12:19.134: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:12:19.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pg8v4" for this suite.
Feb 17 13:12:25.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:12:25.577: INFO: namespace: e2e-tests-kubectl-pg8v4, resource: bindings, ignored listing per whitelist
Feb 17 13:12:25.597: INFO: namespace e2e-tests-kubectl-pg8v4 deletion completed in 6.450525621s

• [SLOW TEST:28.174 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
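
The replace flow logged above can be reproduced by round-tripping the live pod JSON through kubectl replace; the pod name, namespace and the two image references in this sketch are placeholders.

kubectl get pod e2e-test-nginx-pod -n my-namespace -o json \
  | sed 's#docker.io/library/nginx:1.14-alpine#docker.io/library/busybox:1.29#' \
  | kubectl replace -n my-namespace -f -

kubectl get pod e2e-test-nginx-pod -n my-namespace -o jsonpath='{.spec.containers[0].image}' should then show the busybox image, since a pod's container image is one of the few spec fields that may be updated in place.
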
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:12:25.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-xw9rg
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 17 13:12:25.780: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 17 13:13:02.179: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-xw9rg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 13:13:02.180: INFO: >>> kubeConfig: /root/.kube/config
I0217 13:13:02.250132       8 log.go:172] (0xc001e5a580) (0xc000b15900) Create stream
I0217 13:13:02.250203       8 log.go:172] (0xc001e5a580) (0xc000b15900) Stream added, broadcasting: 1
I0217 13:13:02.255674       8 log.go:172] (0xc001e5a580) Reply frame received for 1
I0217 13:13:02.255760       8 log.go:172] (0xc001e5a580) (0xc00089fa40) Create stream
I0217 13:13:02.255770       8 log.go:172] (0xc001e5a580) (0xc00089fa40) Stream added, broadcasting: 3
I0217 13:13:02.257080       8 log.go:172] (0xc001e5a580) Reply frame received for 3
I0217 13:13:02.257103       8 log.go:172] (0xc001e5a580) (0xc000b159a0) Create stream
I0217 13:13:02.257110       8 log.go:172] (0xc001e5a580) (0xc000b159a0) Stream added, broadcasting: 5
I0217 13:13:02.258185       8 log.go:172] (0xc001e5a580) Reply frame received for 5
I0217 13:13:02.423139       8 log.go:172] (0xc001e5a580) Data frame received for 3
I0217 13:13:02.423249       8 log.go:172] (0xc00089fa40) (3) Data frame handling
I0217 13:13:02.423274       8 log.go:172] (0xc00089fa40) (3) Data frame sent
I0217 13:13:02.700198       8 log.go:172] (0xc001e5a580) Data frame received for 1
I0217 13:13:02.700382       8 log.go:172] (0xc001e5a580) (0xc00089fa40) Stream removed, broadcasting: 3
I0217 13:13:02.700463       8 log.go:172] (0xc000b15900) (1) Data frame handling
I0217 13:13:02.700483       8 log.go:172] (0xc000b15900) (1) Data frame sent
I0217 13:13:02.700493       8 log.go:172] (0xc001e5a580) (0xc000b15900) Stream removed, broadcasting: 1
I0217 13:13:02.700558       8 log.go:172] (0xc001e5a580) (0xc000b159a0) Stream removed, broadcasting: 5
I0217 13:13:02.700623       8 log.go:172] (0xc001e5a580) Go away received
I0217 13:13:02.700765       8 log.go:172] (0xc001e5a580) (0xc000b15900) Stream removed, broadcasting: 1
I0217 13:13:02.700779       8 log.go:172] (0xc001e5a580) (0xc00089fa40) Stream removed, broadcasting: 3
I0217 13:13:02.700792       8 log.go:172] (0xc001e5a580) (0xc000b159a0) Stream removed, broadcasting: 5
Feb 17 13:13:02.701: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:13:02.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-xw9rg" for this suite.
Feb 17 13:13:28.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:13:28.993: INFO: namespace: e2e-tests-pod-network-test-xw9rg, resource: bindings, ignored listing per whitelist
Feb 17 13:13:29.082: INFO: namespace e2e-tests-pod-network-test-xw9rg deletion completed in 26.34747706s

• [SLOW TEST:63.485 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:13:29.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2w6tq A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-2w6tq;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2w6tq A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-2w6tq;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2w6tq.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-2w6tq.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2w6tq.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-2w6tq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2w6tq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-2w6tq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2w6tq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2w6tq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2w6tq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-2w6tq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2w6tq.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-2w6tq.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-2w6tq.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 188.128.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.128.188_udp@PTR;check="$$(dig +tcp +noall +answer +search 188.128.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.128.188_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2w6tq A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-2w6tq;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2w6tq A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-2w6tq;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-2w6tq.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-2w6tq.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-2w6tq.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-2w6tq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2w6tq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-2w6tq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-2w6tq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2w6tq.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2w6tq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-2w6tq.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-2w6tq.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-2w6tq.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-2w6tq.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 188.128.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.128.188_udp@PTR;check="$$(dig +tcp +noall +answer +search 188.128.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.128.188_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 17 13:13:46.160: INFO: Unable to read 10.103.128.188_tcp@PTR from pod e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-4a511e30-5187-11ea-a180-0242ac110008)
Feb 17 13:13:46.164: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-4a511e30-5187-11ea-a180-0242ac110008)
Feb 17 13:13:46.168: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-4a511e30-5187-11ea-a180-0242ac110008)
Feb 17 13:13:46.173: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-2w6tq from pod e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-4a511e30-5187-11ea-a180-0242ac110008)
Feb 17 13:13:46.181: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-2w6tq from pod e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-4a511e30-5187-11ea-a180-0242ac110008)
Feb 17 13:13:46.187: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-2w6tq.svc from pod e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-4a511e30-5187-11ea-a180-0242ac110008)
Feb 17 13:13:46.195: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-2w6tq.svc from pod e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-4a511e30-5187-11ea-a180-0242ac110008)
Feb 17 13:13:46.199: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-2w6tq.svc from pod e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-4a511e30-5187-11ea-a180-0242ac110008)
Feb 17 13:13:46.202: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2w6tq.svc from pod e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-4a511e30-5187-11ea-a180-0242ac110008)
Feb 17 13:13:46.207: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-2w6tq.svc from pod e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-4a511e30-5187-11ea-a180-0242ac110008)
Feb 17 13:13:46.216: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-2w6tq.svc from pod e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-4a511e30-5187-11ea-a180-0242ac110008)
Feb 17 13:13:46.220: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-4a511e30-5187-11ea-a180-0242ac110008)
Feb 17 13:13:46.227: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008: the server could not find the requested resource (get pods dns-test-4a511e30-5187-11ea-a180-0242ac110008)
Feb 17 13:13:46.239: INFO: Lookups using e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008 failed for: [10.103.128.188_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-2w6tq jessie_tcp@dns-test-service.e2e-tests-dns-2w6tq jessie_udp@dns-test-service.e2e-tests-dns-2w6tq.svc jessie_tcp@dns-test-service.e2e-tests-dns-2w6tq.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-2w6tq.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-2w6tq.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-2w6tq.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-2w6tq.svc jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 17 13:13:51.929: INFO: DNS probes using e2e-tests-dns-2w6tq/dns-test-4a511e30-5187-11ea-a180-0242ac110008 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:13:52.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-2w6tq" for this suite.
Feb 17 13:13:59.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:14:00.009: INFO: namespace: e2e-tests-dns-2w6tq, resource: bindings, ignored listing per whitelist
Feb 17 13:14:00.114: INFO: namespace e2e-tests-dns-2w6tq deletion completed in 6.56416081s

• [SLOW TEST:31.031 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 17 13:14:00.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 17 13:14:10.954: INFO: Successfully updated pod "pod-update-5c82c705-5187-11ea-a180-0242ac110008"
STEP: verifying the updated pod is in kubernetes
Feb 17 13:14:10.988: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 17 13:14:10.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-cbvfh" for this suite.
Feb 17 13:14:35.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 13:14:35.230: INFO: namespace: e2e-tests-pods-cbvfh, resource: bindings, ignored listing per whitelist
Feb 17 13:14:35.241: INFO: namespace e2e-tests-pods-cbvfh deletion completed in 24.246857603s

• [SLOW TEST:35.127 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
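
The kind of in-place pod update exercised above can be mimicked with a strategic-merge patch against the pod's metadata; the pod name and the label key/value here are placeholders.

kubectl patch pod pod-update-example -p '{"metadata":{"labels":{"time":"updated"}}}'
kubectl get pod pod-update-example --show-labels

The second command should list the new time=updated label, the same kind of "Pod update OK" condition the spec verifies.
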
SSS
Feb 17 13:14:35.241: INFO: Running AfterSuite actions on all nodes
Feb 17 13:14:35.241: INFO: Running AfterSuite actions on node 1
Feb 17 13:14:35.241: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8809.757 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS