I0506 17:16:36.840778 6 e2e.go:224] Starting e2e run "573e746e-8fbd-11ea-a618-0242ac110019" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588785396 - Will randomize all specs
Will run 201 of 2164 specs

May 6 17:16:37.023: INFO: >>> kubeConfig: /root/.kube/config
May 6 17:16:37.026: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 6 17:16:37.044: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 6 17:16:37.070: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 6 17:16:37.070: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 6 17:16:37.070: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 6 17:16:37.077: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 6 17:16:37.077: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 6 17:16:37.077: INFO: e2e test version: v1.13.12
May 6 17:16:37.078: INFO: kube-apiserver version: v1.13.12
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:16:37.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
May 6 17:16:37.228: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-nq9r
STEP: Creating a pod to test atomic-volume-subpath
May 6 17:16:37.243: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nq9r" in namespace "e2e-tests-subpath-cv2x7" to be "success or failure"
May 6 17:16:37.264: INFO: Pod "pod-subpath-test-configmap-nq9r": Phase="Pending", Reason="", readiness=false. Elapsed: 21.167506ms
May 6 17:16:39.322: INFO: Pod "pod-subpath-test-configmap-nq9r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078970061s
May 6 17:16:41.325: INFO: Pod "pod-subpath-test-configmap-nq9r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082165468s
May 6 17:16:43.328: INFO: Pod "pod-subpath-test-configmap-nq9r": Phase="Running", Reason="", readiness=true. Elapsed: 6.085793944s
May 6 17:16:45.333: INFO: Pod "pod-subpath-test-configmap-nq9r": Phase="Running", Reason="", readiness=false. Elapsed: 8.090434664s
May 6 17:16:47.337: INFO: Pod "pod-subpath-test-configmap-nq9r": Phase="Running", Reason="", readiness=false. Elapsed: 10.094824226s
May 6 17:16:49.342: INFO: Pod "pod-subpath-test-configmap-nq9r": Phase="Running", Reason="", readiness=false. Elapsed: 12.099206288s
May 6 17:16:51.346: INFO: Pod "pod-subpath-test-configmap-nq9r": Phase="Running", Reason="", readiness=false. Elapsed: 14.103763053s
May 6 17:16:53.350: INFO: Pod "pod-subpath-test-configmap-nq9r": Phase="Running", Reason="", readiness=false. Elapsed: 16.107815975s
May 6 17:16:55.354: INFO: Pod "pod-subpath-test-configmap-nq9r": Phase="Running", Reason="", readiness=false. Elapsed: 18.111886281s
May 6 17:16:57.435: INFO: Pod "pod-subpath-test-configmap-nq9r": Phase="Running", Reason="", readiness=false. Elapsed: 20.192680741s
May 6 17:16:59.439: INFO: Pod "pod-subpath-test-configmap-nq9r": Phase="Running", Reason="", readiness=false. Elapsed: 22.19657798s
May 6 17:17:01.443: INFO: Pod "pod-subpath-test-configmap-nq9r": Phase="Running", Reason="", readiness=false. Elapsed: 24.200732802s
May 6 17:17:03.448: INFO: Pod "pod-subpath-test-configmap-nq9r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.205059176s
STEP: Saw pod success
May 6 17:17:03.448: INFO: Pod "pod-subpath-test-configmap-nq9r" satisfied condition "success or failure"
May 6 17:17:03.451: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-nq9r container test-container-subpath-configmap-nq9r:
STEP: delete the pod
May 6 17:17:03.505: INFO: Waiting for pod pod-subpath-test-configmap-nq9r to disappear
May 6 17:17:03.514: INFO: Pod pod-subpath-test-configmap-nq9r no longer exists
STEP: Deleting pod pod-subpath-test-configmap-nq9r
May 6 17:17:03.514: INFO: Deleting pod "pod-subpath-test-configmap-nq9r" in namespace "e2e-tests-subpath-cv2x7"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:17:03.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-cv2x7" for this suite.
May 6 17:17:09.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:17:09.602: INFO: namespace: e2e-tests-subpath-cv2x7, resource: bindings, ignored listing per whitelist
May 6 17:17:09.621: INFO: namespace e2e-tests-subpath-cv2x7 deletion completed in 6.101754403s
• [SLOW TEST:32.543 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:17:09.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 6 17:17:09.778: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b260e9b-8fbd-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-ddtfq" to be "success or failure"
May 6 17:17:09.811: INFO: Pod "downwardapi-volume-6b260e9b-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 33.290148ms
May 6 17:17:11.815: INFO: Pod "downwardapi-volume-6b260e9b-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037218208s
May 6 17:17:13.982: INFO: Pod "downwardapi-volume-6b260e9b-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203944335s
May 6 17:17:16.075: INFO: Pod "downwardapi-volume-6b260e9b-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 6.296807519s
May 6 17:17:18.079: INFO: Pod "downwardapi-volume-6b260e9b-8fbd-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.301591557s
STEP: Saw pod success
May 6 17:17:18.079: INFO: Pod "downwardapi-volume-6b260e9b-8fbd-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:17:18.083: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6b260e9b-8fbd-11ea-a618-0242ac110019 container client-container:
STEP: delete the pod
May 6 17:17:18.685: INFO: Waiting for pod downwardapi-volume-6b260e9b-8fbd-11ea-a618-0242ac110019 to disappear
May 6 17:17:18.690: INFO: Pod downwardapi-volume-6b260e9b-8fbd-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:17:18.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ddtfq" for this suite.
May 6 17:17:24.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:17:24.824: INFO: namespace: e2e-tests-projected-ddtfq, resource: bindings, ignored listing per whitelist
May 6 17:17:24.845: INFO: namespace e2e-tests-projected-ddtfq deletion completed in 6.151520628s
• [SLOW TEST:15.224 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:17:24.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 6 17:17:25.087: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
May 6 17:17:25.096: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-vp6z9/daemonsets","resourceVersion":"9081831"},"items":null}
May 6 17:17:25.098: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-vp6z9/pods","resourceVersion":"9081831"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:17:25.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-vp6z9" for this suite.
May 6 17:17:31.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:17:31.303: INFO: namespace: e2e-tests-daemonsets-vp6z9, resource: bindings, ignored listing per whitelist
May 6 17:17:31.348: INFO: namespace e2e-tests-daemonsets-vp6z9 deletion completed in 6.236191934s
S [SKIPPING] [6.503 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should rollback without unnecessary restarts [Conformance] [It]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 6 17:17:25.087: Requires at least 2 nodes (not -1)
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:17:31.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-783118cd-8fbd-11ea-a618-0242ac110019
STEP: Creating a pod to test consume configMaps
May 6 17:17:31.655: INFO: Waiting up to 5m0s for pod "pod-configmaps-7833f33f-8fbd-11ea-a618-0242ac110019" in namespace "e2e-tests-configmap-zs6m7" to be "success or failure"
May 6 17:17:31.659: INFO: Pod "pod-configmaps-7833f33f-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 3.866676ms
May 6 17:17:33.662: INFO: Pod "pod-configmaps-7833f33f-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007079852s
May 6 17:17:35.671: INFO: Pod "pod-configmaps-7833f33f-8fbd-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016768852s
STEP: Saw pod success
May 6 17:17:35.671: INFO: Pod "pod-configmaps-7833f33f-8fbd-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:17:35.674: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-7833f33f-8fbd-11ea-a618-0242ac110019 container configmap-volume-test:
STEP: delete the pod
May 6 17:17:35.755: INFO: Waiting for pod pod-configmaps-7833f33f-8fbd-11ea-a618-0242ac110019 to disappear
May 6 17:17:35.946: INFO: Pod pod-configmaps-7833f33f-8fbd-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:17:35.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-zs6m7" for this suite.
May 6 17:17:42.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:17:42.064: INFO: namespace: e2e-tests-configmap-zs6m7, resource: bindings, ignored listing per whitelist
May 6 17:17:42.077: INFO: namespace e2e-tests-configmap-zs6m7 deletion completed in 6.127167811s
• [SLOW TEST:10.729 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:17:42.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 6 17:17:42.371: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:17:56.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-g9tgm" for this suite.
May 6 17:18:17.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:18:17.046: INFO: namespace: e2e-tests-init-container-g9tgm, resource: bindings, ignored listing per whitelist
May 6 17:18:17.086: INFO: namespace e2e-tests-init-container-g9tgm deletion completed in 20.155193547s
• [SLOW TEST:35.008 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:18:17.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 6 17:18:17.323: INFO: Waiting up to 5m0s for pod "downwardapi-volume-936ac436-8fbd-11ea-a618-0242ac110019" in namespace "e2e-tests-downward-api-q9vzd" to be "success or failure"
May 6 17:18:17.326: INFO: Pod "downwardapi-volume-936ac436-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 3.365175ms
May 6 17:18:19.331: INFO: Pod "downwardapi-volume-936ac436-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008334462s
May 6 17:18:21.336: INFO: Pod "downwardapi-volume-936ac436-8fbd-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013349073s
STEP: Saw pod success
May 6 17:18:21.336: INFO: Pod "downwardapi-volume-936ac436-8fbd-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:18:21.339: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-936ac436-8fbd-11ea-a618-0242ac110019 container client-container:
STEP: delete the pod
May 6 17:18:21.426: INFO: Waiting for pod downwardapi-volume-936ac436-8fbd-11ea-a618-0242ac110019 to disappear
May 6 17:18:21.434: INFO: Pod downwardapi-volume-936ac436-8fbd-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:18:21.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-q9vzd" for this suite.
May 6 17:18:27.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:18:27.510: INFO: namespace: e2e-tests-downward-api-q9vzd, resource: bindings, ignored listing per whitelist
May 6 17:18:27.529: INFO: namespace e2e-tests-downward-api-q9vzd deletion completed in 6.091119153s
• [SLOW TEST:10.442 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:18:27.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 6 17:18:27.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-f45sb'
May 6 17:18:32.378: INFO: stderr: ""
May 6 17:18:32.378: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
May 6 17:18:32.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-f45sb'
May 6 17:18:37.067: INFO: stderr: ""
May 6 17:18:37.067: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:18:37.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-f45sb" for this suite.
May 6 17:18:43.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:18:43.805: INFO: namespace: e2e-tests-kubectl-f45sb, resource: bindings, ignored listing per whitelist
May 6 17:18:43.860: INFO: namespace e2e-tests-kubectl-f45sb deletion completed in 6.49990848s
• [SLOW TEST:16.331 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a pod from an image when restart is Never [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:18:43.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 6 17:18:51.972: INFO: 9 pods remaining
May 6 17:18:51.972: INFO: 0 pods has nil DeletionTimestamp
May 6 17:18:51.972: INFO:
May 6 17:18:52.863: INFO: 0 pods remaining
May 6 17:18:52.863: INFO: 0 pods has nil DeletionTimestamp
May 6 17:18:52.863: INFO:
STEP: Gathering metrics
W0506 17:18:53.773687 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 6 17:18:53.773: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:18:53.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-q4r28" for this suite.
May 6 17:19:00.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:19:00.157: INFO: namespace: e2e-tests-gc-q4r28, resource: bindings, ignored listing per whitelist
May 6 17:19:00.159: INFO: namespace e2e-tests-gc-q4r28 deletion completed in 6.38261585s
• [SLOW TEST:16.298 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:19:00.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:19:41.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-rpbgp" for this suite.
May 6 17:19:47.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:19:47.570: INFO: namespace: e2e-tests-container-runtime-rpbgp, resource: bindings, ignored listing per whitelist
May 6 17:19:47.582: INFO: namespace e2e-tests-container-runtime-rpbgp deletion completed in 6.113739079s
• [SLOW TEST:47.423 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:19:47.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 6 17:19:47.682: INFO: Waiting up to 5m0s for pod "downward-api-c944efcf-8fbd-11ea-a618-0242ac110019" in namespace "e2e-tests-downward-api-tx7q8" to be "success or failure"
May 6 17:19:47.694: INFO: Pod "downward-api-c944efcf-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 11.975119ms
May 6 17:19:49.894: INFO: Pod "downward-api-c944efcf-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212147181s
May 6 17:19:51.924: INFO: Pod "downward-api-c944efcf-8fbd-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.241494642s
STEP: Saw pod success
May 6 17:19:51.924: INFO: Pod "downward-api-c944efcf-8fbd-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:19:51.927: INFO: Trying to get logs from node hunter-worker2 pod downward-api-c944efcf-8fbd-11ea-a618-0242ac110019 container dapi-container:
STEP: delete the pod
May 6 17:19:51.995: INFO: Waiting for pod downward-api-c944efcf-8fbd-11ea-a618-0242ac110019 to disappear
May 6 17:19:52.337: INFO: Pod downward-api-c944efcf-8fbd-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:19:52.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tx7q8" for this suite.
May 6 17:19:58.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:19:58.558: INFO: namespace: e2e-tests-downward-api-tx7q8, resource: bindings, ignored listing per whitelist
May 6 17:19:58.621: INFO: namespace e2e-tests-downward-api-tx7q8 deletion completed in 6.231079027s
• [SLOW TEST:11.039 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:19:58.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
May 6 17:19:58.758: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 6 17:19:58.765: INFO: Waiting for terminating namespaces to be deleted...
May 6 17:19:58.767: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
May 6 17:19:58.773: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 6 17:19:58.773: INFO: Container kindnet-cni ready: true, restart count 0
May 6 17:19:58.773: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 6 17:19:58.773: INFO: Container coredns ready: true, restart count 0
May 6 17:19:58.773: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
May 6 17:19:58.773: INFO: Container kube-proxy ready: true, restart count 0
May 6 17:19:58.773: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
May 6 17:19:58.778: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 6 17:19:58.778: INFO: Container kindnet-cni ready: true, restart count 0
May 6 17:19:58.778: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 6 17:19:58.778: INFO: Container coredns ready: true, restart count 0
May 6 17:19:58.778: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 6 17:19:58.778: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160c80159864bf99], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:19:59.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-kgzkp" for this suite.
May 6 17:20:05.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:20:05.861: INFO: namespace: e2e-tests-sched-pred-kgzkp, resource: bindings, ignored listing per whitelist
May 6 17:20:05.912: INFO: namespace e2e-tests-sched-pred-kgzkp deletion completed in 6.107600048s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:7.290 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes
client May 6 17:20:05.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 17:20:06.018: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:20:07.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-mxlz2" for this suite. May 6 17:20:13.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:20:13.117: INFO: namespace: e2e-tests-custom-resource-definition-mxlz2, resource: bindings, ignored listing per whitelist May 6 17:20:13.174: INFO: namespace e2e-tests-custom-resource-definition-mxlz2 deletion completed in 6.078547876s • [SLOW TEST:7.262 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:20:13.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 6 17:20:18.955: INFO: Successfully updated pod "labelsupdated92e8150-8fbd-11ea-a618-0242ac110019" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:20:21.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-k8k8g" for this suite. 
May 6 17:20:43.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:20:43.413: INFO: namespace: e2e-tests-downward-api-k8k8g, resource: bindings, ignored listing per whitelist May 6 17:20:43.441: INFO: namespace e2e-tests-downward-api-k8k8g deletion completed in 22.177643451s • [SLOW TEST:30.267 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:20:43.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 6 17:20:43.536: INFO: Waiting up to 5m0s for pod "pod-ea8ffac5-8fbd-11ea-a618-0242ac110019" in namespace "e2e-tests-emptydir-4rc8s" to be "success or failure" May 6 17:20:43.557: INFO: Pod "pod-ea8ffac5-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 21.318088ms May 6 17:20:45.561: INFO: Pod "pod-ea8ffac5-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025256807s May 6 17:20:47.565: INFO: Pod "pod-ea8ffac5-8fbd-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029134938s STEP: Saw pod success May 6 17:20:47.565: INFO: Pod "pod-ea8ffac5-8fbd-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:20:47.567: INFO: Trying to get logs from node hunter-worker pod pod-ea8ffac5-8fbd-11ea-a618-0242ac110019 container test-container: STEP: delete the pod May 6 17:20:48.021: INFO: Waiting for pod pod-ea8ffac5-8fbd-11ea-a618-0242ac110019 to disappear May 6 17:20:48.067: INFO: Pod pod-ea8ffac5-8fbd-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:20:48.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4rc8s" for this suite. May 6 17:20:54.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:20:54.271: INFO: namespace: e2e-tests-emptydir-4rc8s, resource: bindings, ignored listing per whitelist May 6 17:20:54.308: INFO: namespace e2e-tests-emptydir-4rc8s deletion completed in 6.237956439s • [SLOW TEST:10.868 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 
STEP: Creating a kubernetes client May 6 17:20:54.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 6 17:20:54.446: INFO: Waiting up to 5m0s for pod "pod-f10e05d8-8fbd-11ea-a618-0242ac110019" in namespace "e2e-tests-emptydir-mj8x5" to be "success or failure" May 6 17:20:54.456: INFO: Pod "pod-f10e05d8-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 9.307767ms May 6 17:20:56.458: INFO: Pod "pod-f10e05d8-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012199227s May 6 17:20:58.463: INFO: Pod "pod-f10e05d8-8fbd-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016719468s STEP: Saw pod success May 6 17:20:58.463: INFO: Pod "pod-f10e05d8-8fbd-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:20:58.466: INFO: Trying to get logs from node hunter-worker pod pod-f10e05d8-8fbd-11ea-a618-0242ac110019 container test-container: STEP: delete the pod May 6 17:20:58.500: INFO: Waiting for pod pod-f10e05d8-8fbd-11ea-a618-0242ac110019 to disappear May 6 17:20:58.503: INFO: Pod pod-f10e05d8-8fbd-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:20:58.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mj8x5" for this suite. 
May 6 17:21:04.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:21:04.527: INFO: namespace: e2e-tests-emptydir-mj8x5, resource: bindings, ignored listing per whitelist May 6 17:21:04.585: INFO: namespace e2e-tests-emptydir-mj8x5 deletion completed in 6.077770002s • [SLOW TEST:10.276 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:21:04.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 17:21:04.674: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f72a3c5b-8fbd-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-sj8qw" to be "success or failure" May 6 17:21:04.690: INFO: Pod "downwardapi-volume-f72a3c5b-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", 
readiness=false. Elapsed: 16.330212ms May 6 17:21:06.695: INFO: Pod "downwardapi-volume-f72a3c5b-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020812899s May 6 17:21:08.699: INFO: Pod "downwardapi-volume-f72a3c5b-8fbd-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024766226s STEP: Saw pod success May 6 17:21:08.699: INFO: Pod "downwardapi-volume-f72a3c5b-8fbd-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:21:08.702: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f72a3c5b-8fbd-11ea-a618-0242ac110019 container client-container: STEP: delete the pod May 6 17:21:08.728: INFO: Waiting for pod downwardapi-volume-f72a3c5b-8fbd-11ea-a618-0242ac110019 to disappear May 6 17:21:08.732: INFO: Pod downwardapi-volume-f72a3c5b-8fbd-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:21:08.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sj8qw" for this suite. 
May 6 17:21:14.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:21:14.879: INFO: namespace: e2e-tests-projected-sj8qw, resource: bindings, ignored listing per whitelist May 6 17:21:14.934: INFO: namespace e2e-tests-projected-sj8qw deletion completed in 6.179956382s • [SLOW TEST:10.348 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:21:14.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 6 17:21:15.057: INFO: Waiting up to 5m0s for pod "pod-fd5c891c-8fbd-11ea-a618-0242ac110019" in namespace "e2e-tests-emptydir-hfkhc" to be "success or failure" May 6 17:21:15.076: INFO: Pod "pod-fd5c891c-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 18.54015ms May 6 17:21:17.195: INFO: Pod "pod-fd5c891c-8fbd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.137783279s May 6 17:21:19.200: INFO: Pod "pod-fd5c891c-8fbd-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 4.142969209s May 6 17:21:21.204: INFO: Pod "pod-fd5c891c-8fbd-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14703777s STEP: Saw pod success May 6 17:21:21.204: INFO: Pod "pod-fd5c891c-8fbd-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:21:21.208: INFO: Trying to get logs from node hunter-worker pod pod-fd5c891c-8fbd-11ea-a618-0242ac110019 container test-container: STEP: delete the pod May 6 17:21:21.230: INFO: Waiting for pod pod-fd5c891c-8fbd-11ea-a618-0242ac110019 to disappear May 6 17:21:21.284: INFO: Pod pod-fd5c891c-8fbd-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:21:21.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-hfkhc" for this suite. 
May 6 17:21:27.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:21:27.385: INFO: namespace: e2e-tests-emptydir-hfkhc, resource: bindings, ignored listing per whitelist May 6 17:21:27.414: INFO: namespace e2e-tests-emptydir-hfkhc deletion completed in 6.12656383s • [SLOW TEST:12.480 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:21:27.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 17:21:27.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 6 17:21:27.626: INFO: stderr: "" May 6 17:21:27.626: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", 
Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:21:27.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5j7gv" for this suite. May 6 17:21:33.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:21:33.687: INFO: namespace: e2e-tests-kubectl-5j7gv, resource: bindings, ignored listing per whitelist May 6 17:21:33.739: INFO: namespace e2e-tests-kubectl-5j7gv deletion completed in 6.108191628s • [SLOW TEST:6.325 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:21:33.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-lbtc STEP: Creating a pod to test atomic-volume-subpath May 6 17:21:33.848: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lbtc" in namespace "e2e-tests-subpath-snt5b" to be "success or failure" May 6 17:21:33.871: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Pending", Reason="", readiness=false. Elapsed: 23.681101ms May 6 17:21:35.876: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028345089s May 6 17:21:37.879: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031348519s May 6 17:21:39.884: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03637105s May 6 17:21:41.889: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041652362s May 6 17:21:43.894: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Running", Reason="", readiness=false. Elapsed: 10.046132775s May 6 17:21:45.898: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Running", Reason="", readiness=false. Elapsed: 12.050801942s May 6 17:21:47.902: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Running", Reason="", readiness=false. Elapsed: 14.054841067s May 6 17:21:49.907: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Running", Reason="", readiness=false. Elapsed: 16.05927571s May 6 17:21:51.911: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Running", Reason="", readiness=false. Elapsed: 18.063781375s May 6 17:21:53.916: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.068172349s May 6 17:21:55.920: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Running", Reason="", readiness=false. Elapsed: 22.072219339s May 6 17:21:57.924: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Running", Reason="", readiness=false. Elapsed: 24.076339204s May 6 17:21:59.928: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Running", Reason="", readiness=false. Elapsed: 26.080340924s May 6 17:22:01.932: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Running", Reason="", readiness=false. Elapsed: 28.084642131s May 6 17:22:03.939: INFO: Pod "pod-subpath-test-configmap-lbtc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.091863273s STEP: Saw pod success May 6 17:22:03.939: INFO: Pod "pod-subpath-test-configmap-lbtc" satisfied condition "success or failure" May 6 17:22:03.943: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-lbtc container test-container-subpath-configmap-lbtc: STEP: delete the pod May 6 17:22:03.982: INFO: Waiting for pod pod-subpath-test-configmap-lbtc to disappear May 6 17:22:03.997: INFO: Pod pod-subpath-test-configmap-lbtc no longer exists STEP: Deleting pod pod-subpath-test-configmap-lbtc May 6 17:22:03.997: INFO: Deleting pod "pod-subpath-test-configmap-lbtc" in namespace "e2e-tests-subpath-snt5b" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:22:03.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-snt5b" for this suite. 
May 6 17:22:10.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:22:10.023: INFO: namespace: e2e-tests-subpath-snt5b, resource: bindings, ignored listing per whitelist May 6 17:22:10.085: INFO: namespace e2e-tests-subpath-snt5b deletion completed in 6.082082367s • [SLOW TEST:36.346 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:22:10.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-1e37f9ad-8fbe-11ea-a618-0242ac110019 STEP: Creating a pod to test consume secrets May 6 17:22:10.197: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1e39e6e0-8fbe-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-gcssj" to be "success or failure" May 6 17:22:10.218: INFO: Pod 
"pod-projected-secrets-1e39e6e0-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 20.53417ms May 6 17:22:12.519: INFO: Pod "pod-projected-secrets-1e39e6e0-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321810067s May 6 17:22:14.523: INFO: Pod "pod-projected-secrets-1e39e6e0-8fbe-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.325535004s STEP: Saw pod success May 6 17:22:14.523: INFO: Pod "pod-projected-secrets-1e39e6e0-8fbe-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:22:14.525: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-1e39e6e0-8fbe-11ea-a618-0242ac110019 container projected-secret-volume-test: STEP: delete the pod May 6 17:22:14.679: INFO: Waiting for pod pod-projected-secrets-1e39e6e0-8fbe-11ea-a618-0242ac110019 to disappear May 6 17:22:14.686: INFO: Pod pod-projected-secrets-1e39e6e0-8fbe-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:22:14.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gcssj" for this suite. 
May 6 17:22:20.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:22:20.727: INFO: namespace: e2e-tests-projected-gcssj, resource: bindings, ignored listing per whitelist May 6 17:22:20.812: INFO: namespace e2e-tests-projected-gcssj deletion completed in 6.122597421s • [SLOW TEST:10.727 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:22:20.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 17:22:20.929: INFO: Waiting up to 5m0s for pod "downwardapi-volume-249faa3c-8fbe-11ea-a618-0242ac110019" in namespace "e2e-tests-downward-api-srz78" to be "success or failure" May 6 17:22:20.959: INFO: Pod "downwardapi-volume-249faa3c-8fbe-11ea-a618-0242ac110019": Phase="Pending", 
Reason="", readiness=false. Elapsed: 30.162268ms May 6 17:22:22.962: INFO: Pod "downwardapi-volume-249faa3c-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03333409s May 6 17:22:24.967: INFO: Pod "downwardapi-volume-249faa3c-8fbe-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 4.037857999s May 6 17:22:26.970: INFO: Pod "downwardapi-volume-249faa3c-8fbe-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041335748s STEP: Saw pod success May 6 17:22:26.970: INFO: Pod "downwardapi-volume-249faa3c-8fbe-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:22:26.972: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-249faa3c-8fbe-11ea-a618-0242ac110019 container client-container: STEP: delete the pod May 6 17:22:27.029: INFO: Waiting for pod downwardapi-volume-249faa3c-8fbe-11ea-a618-0242ac110019 to disappear May 6 17:22:27.106: INFO: Pod downwardapi-volume-249faa3c-8fbe-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:22:27.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-srz78" for this suite. 
May 6 17:22:33.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:22:33.150: INFO: namespace: e2e-tests-downward-api-srz78, resource: bindings, ignored listing per whitelist May 6 17:22:33.204: INFO: namespace e2e-tests-downward-api-srz78 deletion completed in 6.094200789s • [SLOW TEST:12.392 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:22:33.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 6 17:22:41.423: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:22:41.426: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:22:43.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:22:43.430: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:22:45.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:22:45.431: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:22:47.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:22:47.437: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:22:49.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:22:49.430: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:22:51.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:22:51.430: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:22:53.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:22:53.430: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:22:55.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:22:55.430: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:22:57.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:22:57.471: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:22:59.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:22:59.430: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:23:01.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:23:01.431: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:23:03.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:23:03.430: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:23:05.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:23:05.430: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:23:07.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:23:07.430: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:23:09.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:23:09.430: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:23:11.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:23:11.429: INFO: Pod pod-with-poststart-exec-hook still exists
May 6 17:23:13.426: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 6 17:23:13.430: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:23:13.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-sbs59" for this suite.
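The poststart test above creates a pod carrying a `postStart` exec hook, confirms the hook ran, then deletes the pod and polls until it disappears. A minimal sketch of such a pod follows; the pod name is taken from the log, but the container name, image, and commands are assumptions, not the framework's actual manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook   # name as seen in the log above
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # Runs inside the container immediately after it starts;
          # the test's "check poststart hook" step verifies a side effect like this
          command: ["sh", "-c", "echo poststart > /tmp/poststart"]
```

Note that `postStart` runs asynchronously with the container's entrypoint, which is why the test checks for the hook's effect rather than its ordering.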
May 6 17:23:37.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:23:37.552: INFO: namespace: e2e-tests-container-lifecycle-hook-sbs59, resource: bindings, ignored listing per whitelist May 6 17:23:37.610: INFO: namespace e2e-tests-container-lifecycle-hook-sbs59 deletion completed in 24.176679676s • [SLOW TEST:64.406 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:23:37.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod May 6 17:23:43.823: INFO: Pod pod-hostip-5264189b-8fbe-11ea-a618-0242ac110019 has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:23:43.823: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "e2e-tests-pods-rtmnp" for this suite. May 6 17:24:06.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:24:06.089: INFO: namespace: e2e-tests-pods-rtmnp, resource: bindings, ignored listing per whitelist May 6 17:24:06.151: INFO: namespace e2e-tests-pods-rtmnp deletion completed in 22.32424824s • [SLOW TEST:28.540 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:24:06.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 17:24:06.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6378de60-8fbe-11ea-a618-0242ac110019" in namespace "e2e-tests-downward-api-xt99t" to be "success or failure" May 6 17:24:06.440: INFO: Pod "downwardapi-volume-6378de60-8fbe-11ea-a618-0242ac110019": 
Phase="Pending", Reason="", readiness=false. Elapsed: 66.485963ms May 6 17:24:08.444: INFO: Pod "downwardapi-volume-6378de60-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070680354s May 6 17:24:10.451: INFO: Pod "downwardapi-volume-6378de60-8fbe-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07802555s STEP: Saw pod success May 6 17:24:10.451: INFO: Pod "downwardapi-volume-6378de60-8fbe-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:24:10.454: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-6378de60-8fbe-11ea-a618-0242ac110019 container client-container: STEP: delete the pod May 6 17:24:10.494: INFO: Waiting for pod downwardapi-volume-6378de60-8fbe-11ea-a618-0242ac110019 to disappear May 6 17:24:10.508: INFO: Pod downwardapi-volume-6378de60-8fbe-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:24:10.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xt99t" for this suite. 
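The CPU-limit variant of the Downward API volume test works the same way as the memory-limit test earlier in the log, differing only in the projected resource field. As a hedged sketch, the volume stanza would look roughly like this (the container name, path, and divisor are illustrative assumptions; the rest of the pod spec is the usual boilerplate):

```yaml
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "cpu_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: "1m"   # report the limit in millicores
```
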
May 6 17:24:16.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:24:16.644: INFO: namespace: e2e-tests-downward-api-xt99t, resource: bindings, ignored listing per whitelist May 6 17:24:16.671: INFO: namespace e2e-tests-downward-api-xt99t deletion completed in 6.159945922s • [SLOW TEST:10.520 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:24:16.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 6 17:24:16.853: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fzwf9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fzwf9/configmaps/e2e-watch-test-watch-closed,UID:69b34cce-8fbe-11ea-99e8-0242ac110002,ResourceVersion:9083326,Generation:0,CreationTimestamp:2020-05-06 17:24:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 6 17:24:16.853: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fzwf9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fzwf9/configmaps/e2e-watch-test-watch-closed,UID:69b34cce-8fbe-11ea-99e8-0242ac110002,ResourceVersion:9083327,Generation:0,CreationTimestamp:2020-05-06 17:24:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May 6 17:24:16.888: INFO: Got : MODIFIED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fzwf9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fzwf9/configmaps/e2e-watch-test-watch-closed,UID:69b34cce-8fbe-11ea-99e8-0242ac110002,ResourceVersion:9083328,Generation:0,CreationTimestamp:2020-05-06 17:24:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 6 17:24:16.888: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fzwf9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fzwf9/configmaps/e2e-watch-test-watch-closed,UID:69b34cce-8fbe-11ea-99e8-0242ac110002,ResourceVersion:9083329,Generation:0,CreationTimestamp:2020-05-06 17:24:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:24:16.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-fzwf9" for this suite.
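The ConfigMap being watched can be reconstructed from the object dumps logged above. As a manifest it is roughly the following; the name, namespace, and label come from the log, while the rest is standard boilerplate:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-watch-closed
  namespace: e2e-tests-watch-fzwf9
  labels:
    watch-this-configmap: watch-closed-and-restarted
data:
  mutation: "2"   # final value observed before the DELETED event
```

The second watch is started from the `ResourceVersion` recorded by the first watch (9083327 in the dumps), which is why it still observes the MODIFIED and DELETED events that happened while no watch was open.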
May 6 17:24:22.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:24:22.968: INFO: namespace: e2e-tests-watch-fzwf9, resource: bindings, ignored listing per whitelist May 6 17:24:23.020: INFO: namespace e2e-tests-watch-fzwf9 deletion completed in 6.127409625s • [SLOW TEST:6.348 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:24:23.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 6 17:24:39.288: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5wlqw PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 17:24:39.288: 
INFO: >>> kubeConfig: /root/.kube/config I0506 17:24:39.317668 6 log.go:172] (0xc0007c5ce0) (0xc001f89e00) Create stream I0506 17:24:39.317696 6 log.go:172] (0xc0007c5ce0) (0xc001f89e00) Stream added, broadcasting: 1 I0506 17:24:39.320396 6 log.go:172] (0xc0007c5ce0) Reply frame received for 1 I0506 17:24:39.320436 6 log.go:172] (0xc0007c5ce0) (0xc00128b0e0) Create stream I0506 17:24:39.320450 6 log.go:172] (0xc0007c5ce0) (0xc00128b0e0) Stream added, broadcasting: 3 I0506 17:24:39.321785 6 log.go:172] (0xc0007c5ce0) Reply frame received for 3 I0506 17:24:39.321821 6 log.go:172] (0xc0007c5ce0) (0xc001f89f40) Create stream I0506 17:24:39.321839 6 log.go:172] (0xc0007c5ce0) (0xc001f89f40) Stream added, broadcasting: 5 I0506 17:24:39.322794 6 log.go:172] (0xc0007c5ce0) Reply frame received for 5 I0506 17:24:39.394384 6 log.go:172] (0xc0007c5ce0) Data frame received for 5 I0506 17:24:39.394433 6 log.go:172] (0xc001f89f40) (5) Data frame handling I0506 17:24:39.394474 6 log.go:172] (0xc0007c5ce0) Data frame received for 3 I0506 17:24:39.394496 6 log.go:172] (0xc00128b0e0) (3) Data frame handling I0506 17:24:39.394508 6 log.go:172] (0xc00128b0e0) (3) Data frame sent I0506 17:24:39.394523 6 log.go:172] (0xc0007c5ce0) Data frame received for 3 I0506 17:24:39.394535 6 log.go:172] (0xc00128b0e0) (3) Data frame handling I0506 17:24:39.396111 6 log.go:172] (0xc0007c5ce0) Data frame received for 1 I0506 17:24:39.396138 6 log.go:172] (0xc001f89e00) (1) Data frame handling I0506 17:24:39.396166 6 log.go:172] (0xc001f89e00) (1) Data frame sent I0506 17:24:39.396187 6 log.go:172] (0xc0007c5ce0) (0xc001f89e00) Stream removed, broadcasting: 1 I0506 17:24:39.396213 6 log.go:172] (0xc0007c5ce0) Go away received I0506 17:24:39.396314 6 log.go:172] (0xc0007c5ce0) (0xc001f89e00) Stream removed, broadcasting: 1 I0506 17:24:39.396423 6 log.go:172] (0xc0007c5ce0) (0xc00128b0e0) Stream removed, broadcasting: 3 I0506 17:24:39.396439 6 log.go:172] (0xc0007c5ce0) (0xc001f89f40) Stream removed, 
broadcasting: 5 May 6 17:24:39.396: INFO: Exec stderr: "" May 6 17:24:39.396: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5wlqw PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 17:24:39.396: INFO: >>> kubeConfig: /root/.kube/config I0506 17:24:39.426166 6 log.go:172] (0xc0002fba20) (0xc0019e4000) Create stream I0506 17:24:39.426211 6 log.go:172] (0xc0002fba20) (0xc0019e4000) Stream added, broadcasting: 1 I0506 17:24:39.428722 6 log.go:172] (0xc0002fba20) Reply frame received for 1 I0506 17:24:39.428753 6 log.go:172] (0xc0002fba20) (0xc000f3ab40) Create stream I0506 17:24:39.428764 6 log.go:172] (0xc0002fba20) (0xc000f3ab40) Stream added, broadcasting: 3 I0506 17:24:39.429969 6 log.go:172] (0xc0002fba20) Reply frame received for 3 I0506 17:24:39.430016 6 log.go:172] (0xc0002fba20) (0xc00128b180) Create stream I0506 17:24:39.430040 6 log.go:172] (0xc0002fba20) (0xc00128b180) Stream added, broadcasting: 5 I0506 17:24:39.430915 6 log.go:172] (0xc0002fba20) Reply frame received for 5 I0506 17:24:39.477823 6 log.go:172] (0xc0002fba20) Data frame received for 5 I0506 17:24:39.477859 6 log.go:172] (0xc00128b180) (5) Data frame handling I0506 17:24:39.477898 6 log.go:172] (0xc0002fba20) Data frame received for 3 I0506 17:24:39.477922 6 log.go:172] (0xc000f3ab40) (3) Data frame handling I0506 17:24:39.477946 6 log.go:172] (0xc000f3ab40) (3) Data frame sent I0506 17:24:39.477966 6 log.go:172] (0xc0002fba20) Data frame received for 3 I0506 17:24:39.477978 6 log.go:172] (0xc000f3ab40) (3) Data frame handling I0506 17:24:39.479392 6 log.go:172] (0xc0002fba20) Data frame received for 1 I0506 17:24:39.479421 6 log.go:172] (0xc0019e4000) (1) Data frame handling I0506 17:24:39.479444 6 log.go:172] (0xc0019e4000) (1) Data frame sent I0506 17:24:39.479464 6 log.go:172] (0xc0002fba20) (0xc0019e4000) Stream removed, broadcasting: 1 I0506 17:24:39.479493 6 
log.go:172] (0xc0002fba20) Go away received I0506 17:24:39.479663 6 log.go:172] (0xc0002fba20) (0xc0019e4000) Stream removed, broadcasting: 1 I0506 17:24:39.479693 6 log.go:172] (0xc0002fba20) (0xc000f3ab40) Stream removed, broadcasting: 3 I0506 17:24:39.479702 6 log.go:172] (0xc0002fba20) (0xc00128b180) Stream removed, broadcasting: 5 May 6 17:24:39.479: INFO: Exec stderr: "" May 6 17:24:39.479: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5wlqw PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 17:24:39.479: INFO: >>> kubeConfig: /root/.kube/config I0506 17:24:39.511318 6 log.go:172] (0xc0002fbef0) (0xc0019e4280) Create stream I0506 17:24:39.511357 6 log.go:172] (0xc0002fbef0) (0xc0019e4280) Stream added, broadcasting: 1 I0506 17:24:39.514534 6 log.go:172] (0xc0002fbef0) Reply frame received for 1 I0506 17:24:39.514570 6 log.go:172] (0xc0002fbef0) (0xc0019e4320) Create stream I0506 17:24:39.514581 6 log.go:172] (0xc0002fbef0) (0xc0019e4320) Stream added, broadcasting: 3 I0506 17:24:39.516073 6 log.go:172] (0xc0002fbef0) Reply frame received for 3 I0506 17:24:39.516169 6 log.go:172] (0xc0002fbef0) (0xc00128b220) Create stream I0506 17:24:39.516203 6 log.go:172] (0xc0002fbef0) (0xc00128b220) Stream added, broadcasting: 5 I0506 17:24:39.517876 6 log.go:172] (0xc0002fbef0) Reply frame received for 5 I0506 17:24:39.575053 6 log.go:172] (0xc0002fbef0) Data frame received for 5 I0506 17:24:39.575083 6 log.go:172] (0xc00128b220) (5) Data frame handling I0506 17:24:39.575110 6 log.go:172] (0xc0002fbef0) Data frame received for 3 I0506 17:24:39.575145 6 log.go:172] (0xc0019e4320) (3) Data frame handling I0506 17:24:39.575167 6 log.go:172] (0xc0019e4320) (3) Data frame sent I0506 17:24:39.575178 6 log.go:172] (0xc0002fbef0) Data frame received for 3 I0506 17:24:39.575185 6 log.go:172] (0xc0019e4320) (3) Data frame handling I0506 17:24:39.576266 6 log.go:172] 
(0xc0002fbef0) Data frame received for 1 I0506 17:24:39.576278 6 log.go:172] (0xc0019e4280) (1) Data frame handling I0506 17:24:39.576291 6 log.go:172] (0xc0019e4280) (1) Data frame sent I0506 17:24:39.576300 6 log.go:172] (0xc0002fbef0) (0xc0019e4280) Stream removed, broadcasting: 1 I0506 17:24:39.576350 6 log.go:172] (0xc0002fbef0) Go away received I0506 17:24:39.576388 6 log.go:172] (0xc0002fbef0) (0xc0019e4280) Stream removed, broadcasting: 1 I0506 17:24:39.576433 6 log.go:172] (0xc0002fbef0) (0xc0019e4320) Stream removed, broadcasting: 3 I0506 17:24:39.576465 6 log.go:172] (0xc0002fbef0) (0xc00128b220) Stream removed, broadcasting: 5 May 6 17:24:39.576: INFO: Exec stderr: "" May 6 17:24:39.576: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5wlqw PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 17:24:39.576: INFO: >>> kubeConfig: /root/.kube/config I0506 17:24:39.601797 6 log.go:172] (0xc00194a420) (0xc0019e45a0) Create stream I0506 17:24:39.601821 6 log.go:172] (0xc00194a420) (0xc0019e45a0) Stream added, broadcasting: 1 I0506 17:24:39.604054 6 log.go:172] (0xc00194a420) Reply frame received for 1 I0506 17:24:39.604096 6 log.go:172] (0xc00194a420) (0xc00128b2c0) Create stream I0506 17:24:39.604106 6 log.go:172] (0xc00194a420) (0xc00128b2c0) Stream added, broadcasting: 3 I0506 17:24:39.604744 6 log.go:172] (0xc00194a420) Reply frame received for 3 I0506 17:24:39.604768 6 log.go:172] (0xc00194a420) (0xc000f3abe0) Create stream I0506 17:24:39.604777 6 log.go:172] (0xc00194a420) (0xc000f3abe0) Stream added, broadcasting: 5 I0506 17:24:39.605643 6 log.go:172] (0xc00194a420) Reply frame received for 5 I0506 17:24:39.676599 6 log.go:172] (0xc00194a420) Data frame received for 5 I0506 17:24:39.676641 6 log.go:172] (0xc000f3abe0) (5) Data frame handling I0506 17:24:39.676671 6 log.go:172] (0xc00194a420) Data frame received for 3 I0506 17:24:39.676685 
6 log.go:172] (0xc00128b2c0) (3) Data frame handling I0506 17:24:39.676699 6 log.go:172] (0xc00128b2c0) (3) Data frame sent I0506 17:24:39.676713 6 log.go:172] (0xc00194a420) Data frame received for 3 I0506 17:24:39.676725 6 log.go:172] (0xc00128b2c0) (3) Data frame handling I0506 17:24:39.678307 6 log.go:172] (0xc00194a420) Data frame received for 1 I0506 17:24:39.678359 6 log.go:172] (0xc0019e45a0) (1) Data frame handling I0506 17:24:39.678398 6 log.go:172] (0xc0019e45a0) (1) Data frame sent I0506 17:24:39.678447 6 log.go:172] (0xc00194a420) (0xc0019e45a0) Stream removed, broadcasting: 1 I0506 17:24:39.678555 6 log.go:172] (0xc00194a420) Go away received I0506 17:24:39.678604 6 log.go:172] (0xc00194a420) (0xc0019e45a0) Stream removed, broadcasting: 1 I0506 17:24:39.678624 6 log.go:172] (0xc00194a420) (0xc00128b2c0) Stream removed, broadcasting: 3 I0506 17:24:39.678633 6 log.go:172] (0xc00194a420) (0xc000f3abe0) Stream removed, broadcasting: 5 May 6 17:24:39.678: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 6 17:24:39.678: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5wlqw PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 17:24:39.678: INFO: >>> kubeConfig: /root/.kube/config I0506 17:24:39.710911 6 log.go:172] (0xc001b642c0) (0xc00128b4a0) Create stream I0506 17:24:39.710940 6 log.go:172] (0xc001b642c0) (0xc00128b4a0) Stream added, broadcasting: 1 I0506 17:24:39.728695 6 log.go:172] (0xc001b642c0) Reply frame received for 1 I0506 17:24:39.728751 6 log.go:172] (0xc001b642c0) (0xc001198500) Create stream I0506 17:24:39.728764 6 log.go:172] (0xc001b642c0) (0xc001198500) Stream added, broadcasting: 3 I0506 17:24:39.729875 6 log.go:172] (0xc001b642c0) Reply frame received for 3 I0506 17:24:39.729908 6 log.go:172] (0xc001b642c0) (0xc00128b540) Create stream I0506 
17:24:39.729921 6 log.go:172] (0xc001b642c0) (0xc00128b540) Stream added, broadcasting: 5 I0506 17:24:39.731211 6 log.go:172] (0xc001b642c0) Reply frame received for 5 I0506 17:24:39.795545 6 log.go:172] (0xc001b642c0) Data frame received for 3 I0506 17:24:39.795578 6 log.go:172] (0xc001b642c0) Data frame received for 5 I0506 17:24:39.795603 6 log.go:172] (0xc00128b540) (5) Data frame handling I0506 17:24:39.795634 6 log.go:172] (0xc001198500) (3) Data frame handling I0506 17:24:39.795651 6 log.go:172] (0xc001198500) (3) Data frame sent I0506 17:24:39.795664 6 log.go:172] (0xc001b642c0) Data frame received for 3 I0506 17:24:39.795750 6 log.go:172] (0xc001198500) (3) Data frame handling I0506 17:24:39.796882 6 log.go:172] (0xc001b642c0) Data frame received for 1 I0506 17:24:39.796898 6 log.go:172] (0xc00128b4a0) (1) Data frame handling I0506 17:24:39.796911 6 log.go:172] (0xc00128b4a0) (1) Data frame sent I0506 17:24:39.796935 6 log.go:172] (0xc001b642c0) (0xc00128b4a0) Stream removed, broadcasting: 1 I0506 17:24:39.796951 6 log.go:172] (0xc001b642c0) Go away received I0506 17:24:39.797070 6 log.go:172] (0xc001b642c0) (0xc00128b4a0) Stream removed, broadcasting: 1 I0506 17:24:39.797086 6 log.go:172] (0xc001b642c0) (0xc001198500) Stream removed, broadcasting: 3 I0506 17:24:39.797096 6 log.go:172] (0xc001b642c0) (0xc00128b540) Stream removed, broadcasting: 5 May 6 17:24:39.797: INFO: Exec stderr: "" May 6 17:24:39.797: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5wlqw PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 17:24:39.797: INFO: >>> kubeConfig: /root/.kube/config I0506 17:24:39.823880 6 log.go:172] (0xc0015082c0) (0xc001198780) Create stream I0506 17:24:39.823907 6 log.go:172] (0xc0015082c0) (0xc001198780) Stream added, broadcasting: 1 I0506 17:24:39.825832 6 log.go:172] (0xc0015082c0) Reply frame received for 1 I0506 17:24:39.825888 6 
log.go:172] (0xc0015082c0) (0xc0019e46e0) Create stream I0506 17:24:39.825901 6 log.go:172] (0xc0015082c0) (0xc0019e46e0) Stream added, broadcasting: 3 I0506 17:24:39.826665 6 log.go:172] (0xc0015082c0) Reply frame received for 3 I0506 17:24:39.826702 6 log.go:172] (0xc0015082c0) (0xc0019e4780) Create stream I0506 17:24:39.826718 6 log.go:172] (0xc0015082c0) (0xc0019e4780) Stream added, broadcasting: 5 I0506 17:24:39.827481 6 log.go:172] (0xc0015082c0) Reply frame received for 5 I0506 17:24:39.892991 6 log.go:172] (0xc0015082c0) Data frame received for 5 I0506 17:24:39.893033 6 log.go:172] (0xc0015082c0) Data frame received for 3 I0506 17:24:39.893080 6 log.go:172] (0xc0019e46e0) (3) Data frame handling I0506 17:24:39.893292 6 log.go:172] (0xc0019e46e0) (3) Data frame sent I0506 17:24:39.893332 6 log.go:172] (0xc0019e4780) (5) Data frame handling I0506 17:24:39.893385 6 log.go:172] (0xc0015082c0) Data frame received for 3 I0506 17:24:39.893398 6 log.go:172] (0xc0019e46e0) (3) Data frame handling I0506 17:24:39.894776 6 log.go:172] (0xc0015082c0) Data frame received for 1 I0506 17:24:39.894789 6 log.go:172] (0xc001198780) (1) Data frame handling I0506 17:24:39.894795 6 log.go:172] (0xc001198780) (1) Data frame sent I0506 17:24:39.894986 6 log.go:172] (0xc0015082c0) (0xc001198780) Stream removed, broadcasting: 1 I0506 17:24:39.895074 6 log.go:172] (0xc0015082c0) Go away received I0506 17:24:39.895188 6 log.go:172] (0xc0015082c0) (0xc001198780) Stream removed, broadcasting: 1 I0506 17:24:39.895208 6 log.go:172] (0xc0015082c0) (0xc0019e46e0) Stream removed, broadcasting: 3 I0506 17:24:39.895219 6 log.go:172] (0xc0015082c0) (0xc0019e4780) Stream removed, broadcasting: 5 May 6 17:24:39.895: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 6 17:24:39.895: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5wlqw PodName:test-host-network-pod 
ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 17:24:39.895: INFO: >>> kubeConfig: /root/.kube/config I0506 17:24:39.932143 6 log.go:172] (0xc00194a8f0) (0xc0019e4a00) Create stream I0506 17:24:39.932178 6 log.go:172] (0xc00194a8f0) (0xc0019e4a00) Stream added, broadcasting: 1 I0506 17:24:39.942311 6 log.go:172] (0xc00194a8f0) Reply frame received for 1 I0506 17:24:39.942366 6 log.go:172] (0xc00194a8f0) (0xc001fc6000) Create stream I0506 17:24:39.942381 6 log.go:172] (0xc00194a8f0) (0xc001fc6000) Stream added, broadcasting: 3 I0506 17:24:39.943111 6 log.go:172] (0xc00194a8f0) Reply frame received for 3 I0506 17:24:39.943148 6 log.go:172] (0xc00194a8f0) (0xc001f88000) Create stream I0506 17:24:39.943160 6 log.go:172] (0xc00194a8f0) (0xc001f88000) Stream added, broadcasting: 5 I0506 17:24:39.943943 6 log.go:172] (0xc00194a8f0) Reply frame received for 5 I0506 17:24:39.996586 6 log.go:172] (0xc00194a8f0) Data frame received for 5 I0506 17:24:39.996635 6 log.go:172] (0xc001f88000) (5) Data frame handling I0506 17:24:39.996679 6 log.go:172] (0xc00194a8f0) Data frame received for 3 I0506 17:24:39.996709 6 log.go:172] (0xc001fc6000) (3) Data frame handling I0506 17:24:39.996748 6 log.go:172] (0xc001fc6000) (3) Data frame sent I0506 17:24:39.996768 6 log.go:172] (0xc00194a8f0) Data frame received for 3 I0506 17:24:39.996786 6 log.go:172] (0xc001fc6000) (3) Data frame handling I0506 17:24:39.998630 6 log.go:172] (0xc00194a8f0) Data frame received for 1 I0506 17:24:39.998667 6 log.go:172] (0xc0019e4a00) (1) Data frame handling I0506 17:24:39.998694 6 log.go:172] (0xc0019e4a00) (1) Data frame sent I0506 17:24:39.998712 6 log.go:172] (0xc00194a8f0) (0xc0019e4a00) Stream removed, broadcasting: 1 I0506 17:24:39.998769 6 log.go:172] (0xc00194a8f0) Go away received I0506 17:24:39.998820 6 log.go:172] (0xc00194a8f0) (0xc0019e4a00) Stream removed, broadcasting: 1 I0506 17:24:39.998838 6 log.go:172] (0xc00194a8f0) (0xc001fc6000) 
Stream removed, broadcasting: 3 I0506 17:24:39.998852 6 log.go:172] (0xc00194a8f0) (0xc001f88000) Stream removed, broadcasting: 5 May 6 17:24:39.998: INFO: Exec stderr: "" May 6 17:24:39.998: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5wlqw PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 17:24:39.998: INFO: >>> kubeConfig: /root/.kube/config I0506 17:24:40.028200 6 log.go:172] (0xc0007c5ad0) (0xc001fc6280) Create stream I0506 17:24:40.028226 6 log.go:172] (0xc0007c5ad0) (0xc001fc6280) Stream added, broadcasting: 1 I0506 17:24:40.030018 6 log.go:172] (0xc0007c5ad0) Reply frame received for 1 I0506 17:24:40.030067 6 log.go:172] (0xc0007c5ad0) (0xc001ec2000) Create stream I0506 17:24:40.030083 6 log.go:172] (0xc0007c5ad0) (0xc001ec2000) Stream added, broadcasting: 3 I0506 17:24:40.031046 6 log.go:172] (0xc0007c5ad0) Reply frame received for 3 I0506 17:24:40.031096 6 log.go:172] (0xc0007c5ad0) (0xc001734000) Create stream I0506 17:24:40.031116 6 log.go:172] (0xc0007c5ad0) (0xc001734000) Stream added, broadcasting: 5 I0506 17:24:40.032166 6 log.go:172] (0xc0007c5ad0) Reply frame received for 5 I0506 17:24:40.101872 6 log.go:172] (0xc0007c5ad0) Data frame received for 5 I0506 17:24:40.101912 6 log.go:172] (0xc001734000) (5) Data frame handling I0506 17:24:40.101966 6 log.go:172] (0xc0007c5ad0) Data frame received for 3 I0506 17:24:40.101989 6 log.go:172] (0xc001ec2000) (3) Data frame handling I0506 17:24:40.102002 6 log.go:172] (0xc001ec2000) (3) Data frame sent I0506 17:24:40.102014 6 log.go:172] (0xc0007c5ad0) Data frame received for 3 I0506 17:24:40.102023 6 log.go:172] (0xc001ec2000) (3) Data frame handling I0506 17:24:40.103522 6 log.go:172] (0xc0007c5ad0) Data frame received for 1 I0506 17:24:40.103562 6 log.go:172] (0xc001fc6280) (1) Data frame handling I0506 17:24:40.103603 6 log.go:172] (0xc001fc6280) (1) Data frame sent 
I0506 17:24:40.103636 6 log.go:172] (0xc0007c5ad0) (0xc001fc6280) Stream removed, broadcasting: 1 I0506 17:24:40.103667 6 log.go:172] (0xc0007c5ad0) Go away received I0506 17:24:40.103802 6 log.go:172] (0xc0007c5ad0) (0xc001fc6280) Stream removed, broadcasting: 1 I0506 17:24:40.103824 6 log.go:172] (0xc0007c5ad0) (0xc001ec2000) Stream removed, broadcasting: 3 I0506 17:24:40.103833 6 log.go:172] (0xc0007c5ad0) (0xc001734000) Stream removed, broadcasting: 5 May 6 17:24:40.103: INFO: Exec stderr: "" May 6 17:24:40.103: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5wlqw PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 17:24:40.103: INFO: >>> kubeConfig: /root/.kube/config I0506 17:24:40.133003 6 log.go:172] (0xc0002fb6b0) (0xc001fc6500) Create stream I0506 17:24:40.133036 6 log.go:172] (0xc0002fb6b0) (0xc001fc6500) Stream added, broadcasting: 1 I0506 17:24:40.134907 6 log.go:172] (0xc0002fb6b0) Reply frame received for 1 I0506 17:24:40.134942 6 log.go:172] (0xc0002fb6b0) (0xc001ec20a0) Create stream I0506 17:24:40.134955 6 log.go:172] (0xc0002fb6b0) (0xc001ec20a0) Stream added, broadcasting: 3 I0506 17:24:40.135921 6 log.go:172] (0xc0002fb6b0) Reply frame received for 3 I0506 17:24:40.135972 6 log.go:172] (0xc0002fb6b0) (0xc0002d2000) Create stream I0506 17:24:40.135989 6 log.go:172] (0xc0002fb6b0) (0xc0002d2000) Stream added, broadcasting: 5 I0506 17:24:40.136855 6 log.go:172] (0xc0002fb6b0) Reply frame received for 5 I0506 17:24:40.204694 6 log.go:172] (0xc0002fb6b0) Data frame received for 5 I0506 17:24:40.204717 6 log.go:172] (0xc0002d2000) (5) Data frame handling I0506 17:24:40.204746 6 log.go:172] (0xc0002fb6b0) Data frame received for 3 I0506 17:24:40.204754 6 log.go:172] (0xc001ec20a0) (3) Data frame handling I0506 17:24:40.204766 6 log.go:172] (0xc001ec20a0) (3) Data frame sent I0506 17:24:40.204776 6 log.go:172] (0xc0002fb6b0) Data 
frame received for 3 I0506 17:24:40.204784 6 log.go:172] (0xc001ec20a0) (3) Data frame handling I0506 17:24:40.206221 6 log.go:172] (0xc0002fb6b0) Data frame received for 1 I0506 17:24:40.206237 6 log.go:172] (0xc001fc6500) (1) Data frame handling I0506 17:24:40.206246 6 log.go:172] (0xc001fc6500) (1) Data frame sent I0506 17:24:40.206395 6 log.go:172] (0xc0002fb6b0) (0xc001fc6500) Stream removed, broadcasting: 1 I0506 17:24:40.206423 6 log.go:172] (0xc0002fb6b0) Go away received I0506 17:24:40.206607 6 log.go:172] (0xc0002fb6b0) (0xc001fc6500) Stream removed, broadcasting: 1 I0506 17:24:40.206649 6 log.go:172] (0xc0002fb6b0) (0xc001ec20a0) Stream removed, broadcasting: 3 I0506 17:24:40.206670 6 log.go:172] (0xc0002fb6b0) (0xc0002d2000) Stream removed, broadcasting: 5 May 6 17:24:40.206: INFO: Exec stderr: "" May 6 17:24:40.206: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-5wlqw PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 17:24:40.206: INFO: >>> kubeConfig: /root/.kube/config I0506 17:24:40.237020 6 log.go:172] (0xc00194a2c0) (0xc001ec2320) Create stream I0506 17:24:40.237044 6 log.go:172] (0xc00194a2c0) (0xc001ec2320) Stream added, broadcasting: 1 I0506 17:24:40.238986 6 log.go:172] (0xc00194a2c0) Reply frame received for 1 I0506 17:24:40.239010 6 log.go:172] (0xc00194a2c0) (0xc0002d2140) Create stream I0506 17:24:40.239020 6 log.go:172] (0xc00194a2c0) (0xc0002d2140) Stream added, broadcasting: 3 I0506 17:24:40.239847 6 log.go:172] (0xc00194a2c0) Reply frame received for 3 I0506 17:24:40.239882 6 log.go:172] (0xc00194a2c0) (0xc0002d2280) Create stream I0506 17:24:40.239894 6 log.go:172] (0xc00194a2c0) (0xc0002d2280) Stream added, broadcasting: 5 I0506 17:24:40.240705 6 log.go:172] (0xc00194a2c0) Reply frame received for 5 I0506 17:24:40.313772 6 log.go:172] (0xc00194a2c0) Data frame received for 5 I0506 17:24:40.313823 6 
log.go:172] (0xc0002d2280) (5) Data frame handling I0506 17:24:40.313844 6 log.go:172] (0xc00194a2c0) Data frame received for 3 I0506 17:24:40.313852 6 log.go:172] (0xc0002d2140) (3) Data frame handling I0506 17:24:40.313858 6 log.go:172] (0xc0002d2140) (3) Data frame sent I0506 17:24:40.313912 6 log.go:172] (0xc00194a2c0) Data frame received for 3 I0506 17:24:40.313930 6 log.go:172] (0xc0002d2140) (3) Data frame handling I0506 17:24:40.315491 6 log.go:172] (0xc00194a2c0) Data frame received for 1 I0506 17:24:40.315512 6 log.go:172] (0xc001ec2320) (1) Data frame handling I0506 17:24:40.315535 6 log.go:172] (0xc001ec2320) (1) Data frame sent I0506 17:24:40.315550 6 log.go:172] (0xc00194a2c0) (0xc001ec2320) Stream removed, broadcasting: 1 I0506 17:24:40.315641 6 log.go:172] (0xc00194a2c0) (0xc001ec2320) Stream removed, broadcasting: 1 I0506 17:24:40.315659 6 log.go:172] (0xc00194a2c0) Go away received I0506 17:24:40.315716 6 log.go:172] (0xc00194a2c0) (0xc0002d2140) Stream removed, broadcasting: 3 I0506 17:24:40.315782 6 log.go:172] (0xc00194a2c0) (0xc0002d2280) Stream removed, broadcasting: 5 May 6 17:24:40.315: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:24:40.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-5wlqw" for this suite. 
May 6 17:25:42.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:25:42.377: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-5wlqw, resource: bindings, ignored listing per whitelist May 6 17:25:42.413: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-5wlqw deletion completed in 1m2.093747794s • [SLOW TEST:79.393 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:25:42.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-9cc5f0fa-8fbe-11ea-a618-0242ac110019 STEP: Creating a pod to test consume secrets May 6 17:25:42.523: INFO: Waiting up to 5m0s for pod "pod-secrets-9cc86ad7-8fbe-11ea-a618-0242ac110019" in namespace "e2e-tests-secrets-nzcfb" to be "success or failure" May 6 17:25:42.527: INFO: Pod "pod-secrets-9cc86ad7-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.515584ms May 6 17:25:44.532: INFO: Pod "pod-secrets-9cc86ad7-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008565438s May 6 17:25:46.536: INFO: Pod "pod-secrets-9cc86ad7-8fbe-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012744347s STEP: Saw pod success May 6 17:25:46.536: INFO: Pod "pod-secrets-9cc86ad7-8fbe-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:25:46.539: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-9cc86ad7-8fbe-11ea-a618-0242ac110019 container secret-env-test: STEP: delete the pod May 6 17:25:46.588: INFO: Waiting for pod pod-secrets-9cc86ad7-8fbe-11ea-a618-0242ac110019 to disappear May 6 17:25:46.591: INFO: Pod pod-secrets-9cc86ad7-8fbe-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:25:46.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-nzcfb" for this suite. 
May 6 17:25:52.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:25:52.650: INFO: namespace: e2e-tests-secrets-nzcfb, resource: bindings, ignored listing per whitelist May 6 17:25:52.686: INFO: namespace e2e-tests-secrets-nzcfb deletion completed in 6.09117318s • [SLOW TEST:10.273 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:25:52.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-a2eb8c7e-8fbe-11ea-a618-0242ac110019 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-a2eb8c7e-8fbe-11ea-a618-0242ac110019 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:25:59.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-lh95r" for this suite. 
May 6 17:26:23.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:26:24.050: INFO: namespace: e2e-tests-configmap-lh95r, resource: bindings, ignored listing per whitelist May 6 17:26:24.070: INFO: namespace e2e-tests-configmap-lh95r deletion completed in 24.624429424s • [SLOW TEST:31.384 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:26:24.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 6 17:26:24.256: INFO: Waiting up to 5m0s for pod "downward-api-b5a5d467-8fbe-11ea-a618-0242ac110019" in namespace "e2e-tests-downward-api-nshvs" to be "success or failure" May 6 17:26:24.326: INFO: Pod "downward-api-b5a5d467-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 70.013204ms May 6 17:26:26.343: INFO: Pod "downward-api-b5a5d467-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087402406s May 6 17:26:28.348: INFO: Pod "downward-api-b5a5d467-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091877825s May 6 17:26:30.353: INFO: Pod "downward-api-b5a5d467-8fbe-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096700933s STEP: Saw pod success May 6 17:26:30.353: INFO: Pod "downward-api-b5a5d467-8fbe-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:26:30.355: INFO: Trying to get logs from node hunter-worker2 pod downward-api-b5a5d467-8fbe-11ea-a618-0242ac110019 container dapi-container: STEP: delete the pod May 6 17:26:30.382: INFO: Waiting for pod downward-api-b5a5d467-8fbe-11ea-a618-0242ac110019 to disappear May 6 17:26:30.417: INFO: Pod downward-api-b5a5d467-8fbe-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:26:30.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-nshvs" for this suite. 
May 6 17:26:36.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:26:36.776: INFO: namespace: e2e-tests-downward-api-nshvs, resource: bindings, ignored listing per whitelist May 6 17:26:36.897: INFO: namespace e2e-tests-downward-api-nshvs deletion completed in 6.476473891s • [SLOW TEST:12.826 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:26:36.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 6 17:26:37.066: INFO: Waiting up to 5m0s for pod "pod-bd48dc4e-8fbe-11ea-a618-0242ac110019" in namespace "e2e-tests-emptydir-kz29f" to be "success or failure" May 6 17:26:37.076: INFO: Pod "pod-bd48dc4e-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 10.059552ms May 6 17:26:39.080: INFO: Pod "pod-bd48dc4e-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014616096s May 6 17:26:41.085: INFO: Pod "pod-bd48dc4e-8fbe-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019306126s STEP: Saw pod success May 6 17:26:41.085: INFO: Pod "pod-bd48dc4e-8fbe-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:26:41.088: INFO: Trying to get logs from node hunter-worker2 pod pod-bd48dc4e-8fbe-11ea-a618-0242ac110019 container test-container: STEP: delete the pod May 6 17:26:41.111: INFO: Waiting for pod pod-bd48dc4e-8fbe-11ea-a618-0242ac110019 to disappear May 6 17:26:41.140: INFO: Pod pod-bd48dc4e-8fbe-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:26:41.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kz29f" for this suite. May 6 17:26:47.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:26:47.354: INFO: namespace: e2e-tests-emptydir-kz29f, resource: bindings, ignored listing per whitelist May 6 17:26:47.401: INFO: namespace e2e-tests-emptydir-kz29f deletion completed in 6.25815272s • [SLOW TEST:10.504 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: 
Creating a kubernetes client May 6 17:26:47.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 6 17:26:47.523: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 5.08337ms)
May 6 17:26:47.526: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.117281ms)
May 6 17:26:47.529: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.344359ms)
May 6 17:26:47.531: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.383695ms)
May 6 17:26:47.534: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.577297ms)
May 6 17:26:47.536: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.247181ms)
May 6 17:26:47.539: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.31857ms)
May 6 17:26:47.541: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.702759ms)
May 6 17:26:47.543: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 1.983437ms)
May 6 17:26:47.546: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.768962ms)
May 6 17:26:47.549: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.862914ms)
May 6 17:26:47.552: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.591492ms)
May 6 17:26:47.555: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.117519ms)
May 6 17:26:47.558: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.765352ms)
May 6 17:26:47.560: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.748568ms)
May 6 17:26:47.564: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.387409ms)
May 6 17:26:47.567: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.739014ms)
May 6 17:26:47.570: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.191108ms)
May 6 17:26:47.573: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.537383ms)
May 6 17:26:47.576: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.029663ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:26:47.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-tvdtt" for this suite. May 6 17:26:53.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:26:53.643: INFO: namespace: e2e-tests-proxy-tvdtt, resource: bindings, ignored listing per whitelist May 6 17:26:53.678: INFO: namespace e2e-tests-proxy-tvdtt deletion completed in 6.097398033s • [SLOW TEST:6.276 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:26:53.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-c74ff13f-8fbe-11ea-a618-0242ac110019 STEP:
Creating a pod to test consume configMaps May 6 17:26:53.952: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c7532d7a-8fbe-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-p7rmh" to be "success or failure" May 6 17:26:54.009: INFO: Pod "pod-projected-configmaps-c7532d7a-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 57.83647ms May 6 17:26:56.014: INFO: Pod "pod-projected-configmaps-c7532d7a-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062205039s May 6 17:26:58.270: INFO: Pod "pod-projected-configmaps-c7532d7a-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318768748s May 6 17:27:00.275: INFO: Pod "pod-projected-configmaps-c7532d7a-8fbe-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.323159616s STEP: Saw pod success May 6 17:27:00.275: INFO: Pod "pod-projected-configmaps-c7532d7a-8fbe-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:27:00.278: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-c7532d7a-8fbe-11ea-a618-0242ac110019 container projected-configmap-volume-test: STEP: delete the pod May 6 17:27:00.306: INFO: Waiting for pod pod-projected-configmaps-c7532d7a-8fbe-11ea-a618-0242ac110019 to disappear May 6 17:27:01.015: INFO: Pod pod-projected-configmaps-c7532d7a-8fbe-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:27:01.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p7rmh" for this suite. 
May 6 17:27:07.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:27:07.426: INFO: namespace: e2e-tests-projected-p7rmh, resource: bindings, ignored listing per whitelist May 6 17:27:07.426: INFO: namespace e2e-tests-projected-p7rmh deletion completed in 6.408125761s • [SLOW TEST:13.748 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:27:07.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 17:27:08.053: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfa35176-8fbe-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-c452t" to be "success or failure" May 6 
17:27:08.225: INFO: Pod "downwardapi-volume-cfa35176-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 171.771681ms May 6 17:27:10.314: INFO: Pod "downwardapi-volume-cfa35176-8fbe-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261247659s May 6 17:27:12.319: INFO: Pod "downwardapi-volume-cfa35176-8fbe-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 4.265766723s May 6 17:27:14.323: INFO: Pod "downwardapi-volume-cfa35176-8fbe-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.269593541s STEP: Saw pod success May 6 17:27:14.323: INFO: Pod "downwardapi-volume-cfa35176-8fbe-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:27:14.326: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-cfa35176-8fbe-11ea-a618-0242ac110019 container client-container: STEP: delete the pod May 6 17:27:14.345: INFO: Waiting for pod downwardapi-volume-cfa35176-8fbe-11ea-a618-0242ac110019 to disappear May 6 17:27:14.350: INFO: Pod downwardapi-volume-cfa35176-8fbe-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:27:14.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c452t" for this suite. 
May 6 17:27:20.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:27:20.427: INFO: namespace: e2e-tests-projected-c452t, resource: bindings, ignored listing per whitelist May 6 17:27:20.457: INFO: namespace e2e-tests-projected-c452t deletion completed in 6.104042999s • [SLOW TEST:13.031 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:27:20.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-48xnz A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.e2e-tests-dns-48xnz;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-48xnz A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-48xnz;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-48xnz.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-48xnz.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-48xnz.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-48xnz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-48xnz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-48xnz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-48xnz.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-48xnz.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-48xnz.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 117.20.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.20.117_udp@PTR;check="$$(dig +tcp +noall +answer +search 117.20.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.20.117_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-48xnz A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-48xnz;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-48xnz A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-48xnz;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-48xnz.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-48xnz.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-48xnz.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-48xnz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-48xnz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-48xnz.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-48xnz.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-48xnz.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-48xnz.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 117.20.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.20.117_udp@PTR;check="$$(dig +tcp +noall +answer +search 117.20.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.20.117_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 17:27:37.222: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-48xnz from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:37.305: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:37.308: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:37.312: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-48xnz from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:37.315: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-48xnz from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the 
server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:37.318: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:37.320: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:37.323: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:37.325: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:37.548: INFO: Lookups using e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-48xnz jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-48xnz jessie_tcp@dns-test-service.e2e-tests-dns-48xnz jessie_udp@dns-test-service.e2e-tests-dns-48xnz.svc jessie_tcp@dns-test-service.e2e-tests-dns-48xnz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc] May 6 17:27:42.563: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-48xnz from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods 
dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:42.593: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:42.595: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:42.598: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-48xnz from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:42.600: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-48xnz from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:42.603: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:42.606: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:42.608: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:42.612: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:42.629: INFO: Lookups using e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-48xnz jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-48xnz jessie_tcp@dns-test-service.e2e-tests-dns-48xnz jessie_udp@dns-test-service.e2e-tests-dns-48xnz.svc jessie_tcp@dns-test-service.e2e-tests-dns-48xnz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc] May 6 17:27:47.564: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-48xnz from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:47.595: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:47.599: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:47.602: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-48xnz from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:47.605: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-48xnz from pod 
e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:47.608: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:47.612: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:47.615: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:47.617: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:47.634: INFO: Lookups using e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-48xnz jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-48xnz jessie_tcp@dns-test-service.e2e-tests-dns-48xnz jessie_udp@dns-test-service.e2e-tests-dns-48xnz.svc jessie_tcp@dns-test-service.e2e-tests-dns-48xnz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc] May 6 17:27:52.563: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-48xnz from pod 
e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:52.657: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:52.661: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:52.664: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-48xnz from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:52.667: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-48xnz from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:52.670: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:52.672: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:52.675: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource 
(get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:52.678: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:52.693: INFO: Lookups using e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-48xnz jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-48xnz jessie_tcp@dns-test-service.e2e-tests-dns-48xnz jessie_udp@dns-test-service.e2e-tests-dns-48xnz.svc jessie_tcp@dns-test-service.e2e-tests-dns-48xnz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc] May 6 17:27:57.560: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-48xnz from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:57.587: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:57.589: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:57.593: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-48xnz from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:57.595: INFO: Unable to read 
jessie_tcp@dns-test-service.e2e-tests-dns-48xnz from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:57.599: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:57.601: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:57.604: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:57.606: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc from pod e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019: the server could not find the requested resource (get pods dns-test-d74142f9-8fbe-11ea-a618-0242ac110019) May 6 17:27:57.620: INFO: Lookups using e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019 failed for: [wheezy_tcp@dns-test-service.e2e-tests-dns-48xnz jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-48xnz jessie_tcp@dns-test-service.e2e-tests-dns-48xnz jessie_udp@dns-test-service.e2e-tests-dns-48xnz.svc jessie_tcp@dns-test-service.e2e-tests-dns-48xnz.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-48xnz.svc] May 6 17:28:02.772: INFO: DNS probes using 
e2e-tests-dns-48xnz/dns-test-d74142f9-8fbe-11ea-a618-0242ac110019 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:28:03.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-48xnz" for this suite.
May 6 17:28:10.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:28:10.116: INFO: namespace: e2e-tests-dns-48xnz, resource: bindings, ignored listing per whitelist
May 6 17:28:10.136: INFO: namespace e2e-tests-dns-48xnz deletion completed in 6.233158608s
• [SLOW TEST:49.679 seconds]
[sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:28:10.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
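The log captures no STEP output for the body of this hostAliases case, so for context here is a minimal shell sketch of what the check amounts to: the kubelet appends one "IP hostname..." line per alias from `spec.hostAliases` to the container's /etc/hosts, and the test asserts those lines exist. The IP and hostnames below are made up for illustration, not taken from this run.

```shell
# Hypothetical stand-in for the kubelet-managed /etc/hosts of a pod whose
# spec.hostAliases is [{ip: 123.45.67.89, hostnames: [foo.local, bar.local]}].
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$hosts"
printf '# Entries added by HostAliases.\n' >> "$hosts"
printf '123.45.67.89\tfoo.local\tbar.local\n' >> "$hosts"

# The conformance check reduces to: does the expected alias line exist?
grep -q '123.45.67.89.*foo.local' "$hosts" && echo "hostAliases entry present"
```

In the real test the same grep-style assertion runs against /etc/hosts inside the busybox container rather than a temp file on the host.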
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:28:16.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-xrhc9" for this suite.
May 6 17:29:08.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:29:08.395: INFO: namespace: e2e-tests-kubelet-test-xrhc9, resource: bindings, ignored listing per whitelist
May 6 17:29:08.403: INFO: namespace e2e-tests-kubelet-test-xrhc9 deletion completed in 52.087194593s
• [SLOW TEST:58.267 seconds]
[k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:29:08.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 6 17:29:08.496: INFO: Waiting up to 5m0s for pod "pod-178c2c96-8fbf-11ea-a618-0242ac110019" in namespace 
"e2e-tests-emptydir-8jtnh" to be "success or failure"
May 6 17:29:08.526: INFO: Pod "pod-178c2c96-8fbf-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 29.728452ms
May 6 17:29:10.530: INFO: Pod "pod-178c2c96-8fbf-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033608042s
May 6 17:29:12.534: INFO: Pod "pod-178c2c96-8fbf-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03760449s
STEP: Saw pod success
May 6 17:29:12.534: INFO: Pod "pod-178c2c96-8fbf-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:29:12.536: INFO: Trying to get logs from node hunter-worker pod pod-178c2c96-8fbf-11ea-a618-0242ac110019 container test-container: 
STEP: delete the pod
May 6 17:29:12.761: INFO: Waiting for pod pod-178c2c96-8fbf-11ea-a618-0242ac110019 to disappear
May 6 17:29:12.806: INFO: Pod pod-178c2c96-8fbf-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:29:12.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8jtnh" for this suite. 
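As a rough local analogue of what this (non-root,0644,tmpfs) case verifies (a file created with mode 0644 inside the emptyDir mount reports exactly that mode), the sketch below substitutes a temp directory for the tmpfs-backed emptyDir. The path and the use of GNU `stat -c` are assumptions for illustration; the real check runs the mounttest image inside the pod.

```shell
# Stand-in for the tmpfs-backed emptyDir mount point (illustrative path).
mnt=$(mktemp -d)

# Create the test file with mode 0644: 0666 & ~0133 = 0644.
( umask 0133; : > "$mnt/test-file" )

# Report the octal mode, as the in-pod mounttest container would.
mode=$(stat -c '%a' "$mnt/test-file")   # GNU stat; BSD stat uses -f '%Lp'
echo "perms of test-file: $mode"
```

The "non-root" half of the case additionally runs the container with a non-zero UID, which this host-side sketch does not reproduce.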
May 6 17:29:19.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:29:19.081: INFO: namespace: e2e-tests-emptydir-8jtnh, resource: bindings, ignored listing per whitelist
May 6 17:29:19.220: INFO: namespace e2e-tests-emptydir-8jtnh deletion completed in 6.410376002s
• [SLOW TEST:10.817 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:29:19.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-5k9l8
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-5k9l8
STEP: Deleting pre-stop pod
May 6 17:29:38.758: INFO: Saw: {
    "Hostname": "server",
    "Sent": null,
    "Received": {
        "prestop": 1
    },
    "Errors": null,
    "Log": [
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    ],
    "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:29:38.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-5k9l8" for this suite.
May 6 17:30:18.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:30:18.869: INFO: namespace: e2e-tests-prestop-5k9l8, resource: bindings, ignored listing per whitelist
May 6 17:30:18.875: INFO: namespace e2e-tests-prestop-5k9l8 deletion completed in 40.084517805s
• [SLOW TEST:59.654 seconds]
[k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:30:18.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
May 6 17:30:18.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qbn8r'
May 6 17:30:22.011: INFO: stderr: ""
May 6 17:30:22.011: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
May 6 17:30:23.015: INFO: Selector matched 1 pods for map[app:redis]
May 6 17:30:23.015: INFO: Found 0 / 1
May 6 17:30:24.015: INFO: Selector matched 1 pods for map[app:redis]
May 6 17:30:24.015: INFO: Found 0 / 1
May 6 17:30:25.084: INFO: Selector matched 1 pods for map[app:redis]
May 6 17:30:25.084: INFO: Found 0 / 1
May 6 17:30:26.015: INFO: Selector matched 1 pods for map[app:redis]
May 6 17:30:26.015: INFO: Found 0 / 1
May 6 17:30:27.015: INFO: Selector matched 1 pods for map[app:redis]
May 6 17:30:27.015: INFO: Found 0 / 1
May 6 17:30:28.016: INFO: Selector matched 1 pods for map[app:redis]
May 6 17:30:28.016: INFO: Found 1 / 1
May 6 17:30:28.016: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 6 17:30:28.019: INFO: Selector matched 1 pods for map[app:redis]
May 6 17:30:28.019: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
May 6 17:30:28.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n56g2 redis-master --namespace=e2e-tests-kubectl-qbn8r'
May 6 17:30:28.136: INFO: stderr: ""
May 6 17:30:28.136: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 06 May 17:30:26.022 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 May 17:30:26.022 # Server started, Redis version 3.2.12\n1:M 06 May 17:30:26.022 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 May 17:30:26.022 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 6 17:30:28.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-n56g2 redis-master --namespace=e2e-tests-kubectl-qbn8r --tail=1' May 6 17:30:28.250: INFO: stderr: "" May 6 17:30:28.250: INFO: stdout: "1:M 06 May 17:30:26.022 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 6 17:30:28.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-n56g2 redis-master --namespace=e2e-tests-kubectl-qbn8r --limit-bytes=1' May 6 17:30:28.393: INFO: stderr: "" May 6 17:30:28.393: INFO: stdout: " " STEP: exposing timestamps May 6 17:30:28.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-n56g2 redis-master --namespace=e2e-tests-kubectl-qbn8r --tail=1 --timestamps' May 6 17:30:28.525: INFO: stderr: "" 
May 6 17:30:28.525: INFO: stdout: "2020-05-06T17:30:26.02308924Z 1:M 06 May 17:30:26.022 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 6 17:30:31.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-n56g2 redis-master --namespace=e2e-tests-kubectl-qbn8r --since=1s' May 6 17:30:31.144: INFO: stderr: "" May 6 17:30:31.144: INFO: stdout: "" May 6 17:30:31.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-n56g2 redis-master --namespace=e2e-tests-kubectl-qbn8r --since=24h' May 6 17:30:31.265: INFO: stderr: "" May 6 17:30:31.265: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 06 May 17:30:26.022 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 May 17:30:26.022 # Server started, Redis version 3.2.12\n1:M 06 May 17:30:26.022 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 06 May 17:30:26.022 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources May 6 17:30:31.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-qbn8r' May 6 17:30:31.379: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 17:30:31.379: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 6 17:30:31.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-qbn8r' May 6 17:30:31.477: INFO: stderr: "No resources found.\n" May 6 17:30:31.477: INFO: stdout: "" May 6 17:30:31.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-qbn8r -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 17:30:31.748: INFO: stderr: "" May 6 17:30:31.748: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:30:31.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-qbn8r" for this suite. 
May 6 17:30:37.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:30:37.967: INFO: namespace: e2e-tests-kubectl-qbn8r, resource: bindings, ignored listing per whitelist May 6 17:30:37.984: INFO: namespace e2e-tests-kubectl-qbn8r deletion completed in 6.231174164s • [SLOW TEST:19.109 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:30:37.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 17:30:38.116: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4cf8189d-8fbf-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-8fq2z" to be "success or failure" May 6 17:30:38.138: INFO: Pod 
"downwardapi-volume-4cf8189d-8fbf-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 22.749038ms May 6 17:30:40.143: INFO: Pod "downwardapi-volume-4cf8189d-8fbf-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027153312s May 6 17:30:42.204: INFO: Pod "downwardapi-volume-4cf8189d-8fbf-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088836926s STEP: Saw pod success May 6 17:30:42.204: INFO: Pod "downwardapi-volume-4cf8189d-8fbf-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:30:42.208: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-4cf8189d-8fbf-11ea-a618-0242ac110019 container client-container: STEP: delete the pod May 6 17:30:42.385: INFO: Waiting for pod downwardapi-volume-4cf8189d-8fbf-11ea-a618-0242ac110019 to disappear May 6 17:30:42.418: INFO: Pod downwardapi-volume-4cf8189d-8fbf-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:30:42.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8fq2z" for this suite. 
May 6 17:30:48.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:30:48.442: INFO: namespace: e2e-tests-projected-8fq2z, resource: bindings, ignored listing per whitelist May 6 17:30:48.522: INFO: namespace e2e-tests-projected-8fq2z deletion completed in 6.100268343s • [SLOW TEST:10.537 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:30:48.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-87wv9 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 6
17:30:48.654: INFO: Found 0 stateful pods, waiting for 3 May 6 17:30:58.658: INFO: Found 2 stateful pods, waiting for 3 May 6 17:31:08.660: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 17:31:08.660: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 17:31:08.660: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 6 17:31:08.709: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 6 17:31:18.751: INFO: Updating stateful set ss2 May 6 17:31:18.776: INFO: Waiting for Pod e2e-tests-statefulset-87wv9/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 6 17:31:28.871: INFO: Waiting for Pod e2e-tests-statefulset-87wv9/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 6 17:31:38.947: INFO: Found 2 stateful pods, waiting for 3 May 6 17:31:48.953: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 17:31:48.953: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 17:31:48.953: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 6 17:31:48.978: INFO: Updating stateful set ss2 May 6 17:31:49.003: INFO: Waiting for Pod e2e-tests-statefulset-87wv9/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 6 17:31:59.029: INFO: Updating stateful set ss2 May 6 17:31:59.082: INFO: Waiting for StatefulSet e2e-tests-statefulset-87wv9/ss2 to complete update May 6 17:31:59.082: INFO: Waiting for Pod 
e2e-tests-statefulset-87wv9/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 6 17:32:09.091: INFO: Waiting for StatefulSet e2e-tests-statefulset-87wv9/ss2 to complete update May 6 17:32:09.092: INFO: Waiting for Pod e2e-tests-statefulset-87wv9/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 6 17:32:19.091: INFO: Deleting all statefulset in ns e2e-tests-statefulset-87wv9 May 6 17:32:19.094: INFO: Scaling statefulset ss2 to 0 May 6 17:32:59.115: INFO: Waiting for statefulset status.replicas updated to 0 May 6 17:32:59.118: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:32:59.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-87wv9" for this suite. 
May 6 17:33:07.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:33:07.202: INFO: namespace: e2e-tests-statefulset-87wv9, resource: bindings, ignored listing per whitelist May 6 17:33:07.248: INFO: namespace e2e-tests-statefulset-87wv9 deletion completed in 8.083934882s • [SLOW TEST:138.726 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:33:07.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 17:33:07.363: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:33:11.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vgs8x" for this suite. May 6 17:33:53.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:33:53.543: INFO: namespace: e2e-tests-pods-vgs8x, resource: bindings, ignored listing per whitelist May 6 17:33:53.602: INFO: namespace e2e-tests-pods-vgs8x deletion completed in 42.207125389s • [SLOW TEST:46.354 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:33:53.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 17:33:53.953: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. 
May 6 17:33:54.006: INFO: Number of nodes with available pods: 0 May 6 17:33:54.006: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 6 17:33:54.084: INFO: Number of nodes with available pods: 0 May 6 17:33:54.084: INFO: Node hunter-worker is running more than one daemon pod May 6 17:33:55.088: INFO: Number of nodes with available pods: 0 May 6 17:33:55.088: INFO: Node hunter-worker is running more than one daemon pod May 6 17:33:56.088: INFO: Number of nodes with available pods: 0 May 6 17:33:56.088: INFO: Node hunter-worker is running more than one daemon pod May 6 17:33:57.088: INFO: Number of nodes with available pods: 1 May 6 17:33:57.088: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 6 17:33:57.159: INFO: Number of nodes with available pods: 1 May 6 17:33:57.160: INFO: Number of running nodes: 0, number of available pods: 1 May 6 17:33:58.164: INFO: Number of nodes with available pods: 0 May 6 17:33:58.164: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 6 17:33:58.189: INFO: Number of nodes with available pods: 0 May 6 17:33:58.189: INFO: Node hunter-worker is running more than one daemon pod May 6 17:33:59.591: INFO: Number of nodes with available pods: 0 May 6 17:33:59.591: INFO: Node hunter-worker is running more than one daemon pod May 6 17:34:00.193: INFO: Number of nodes with available pods: 0 May 6 17:34:00.193: INFO: Node hunter-worker is running more than one daemon pod May 6 17:34:01.192: INFO: Number of nodes with available pods: 0 May 6 17:34:01.192: INFO: Node hunter-worker is running more than one daemon pod May 6 17:34:02.194: INFO: Number of nodes with available pods: 0 May 6 17:34:02.194: INFO: Node hunter-worker is running more than one daemon pod May 
6 17:34:03.192: INFO: Number of nodes with available pods: 0 May 6 17:34:03.192: INFO: Node hunter-worker is running more than one daemon pod May 6 17:34:04.193: INFO: Number of nodes with available pods: 0 May 6 17:34:04.193: INFO: Node hunter-worker is running more than one daemon pod May 6 17:34:05.193: INFO: Number of nodes with available pods: 1 May 6 17:34:05.193: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-976xp, will wait for the garbage collector to delete the pods May 6 17:34:05.259: INFO: Deleting DaemonSet.extensions daemon-set took: 5.979068ms May 6 17:34:05.359: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.198665ms May 6 17:34:09.762: INFO: Number of nodes with available pods: 0 May 6 17:34:09.762: INFO: Number of running nodes: 0, number of available pods: 0 May 6 17:34:09.765: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-976xp/daemonsets","resourceVersion":"9085202"},"items":null} May 6 17:34:09.768: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-976xp/pods","resourceVersion":"9085202"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:34:09.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-976xp" for this suite. 
May 6 17:34:15.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:34:16.003: INFO: namespace: e2e-tests-daemonsets-976xp, resource: bindings, ignored listing per whitelist May 6 17:34:16.009: INFO: namespace e2e-tests-daemonsets-976xp deletion completed in 6.155487626s • [SLOW TEST:22.407 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:34:16.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-lz4b STEP: Creating a pod to test atomic-volume-subpath May 6 17:34:16.262: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-lz4b" in namespace "e2e-tests-subpath-5bfzq" to be "success or failure" May 6 17:34:16.268: INFO: Pod "pod-subpath-test-projected-lz4b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.251244ms May 6 17:34:18.340: INFO: Pod "pod-subpath-test-projected-lz4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077563397s May 6 17:34:20.343: INFO: Pod "pod-subpath-test-projected-lz4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081370038s May 6 17:34:22.790: INFO: Pod "pod-subpath-test-projected-lz4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.528124294s May 6 17:34:24.795: INFO: Pod "pod-subpath-test-projected-lz4b": Phase="Running", Reason="", readiness=false. Elapsed: 8.532867807s May 6 17:34:26.799: INFO: Pod "pod-subpath-test-projected-lz4b": Phase="Running", Reason="", readiness=false. Elapsed: 10.537400371s May 6 17:34:28.804: INFO: Pod "pod-subpath-test-projected-lz4b": Phase="Running", Reason="", readiness=false. Elapsed: 12.541923498s May 6 17:34:30.808: INFO: Pod "pod-subpath-test-projected-lz4b": Phase="Running", Reason="", readiness=false. Elapsed: 14.546347513s May 6 17:34:32.813: INFO: Pod "pod-subpath-test-projected-lz4b": Phase="Running", Reason="", readiness=false. Elapsed: 16.550921134s May 6 17:34:34.817: INFO: Pod "pod-subpath-test-projected-lz4b": Phase="Running", Reason="", readiness=false. Elapsed: 18.555328291s May 6 17:34:36.822: INFO: Pod "pod-subpath-test-projected-lz4b": Phase="Running", Reason="", readiness=false. Elapsed: 20.560039445s May 6 17:34:38.827: INFO: Pod "pod-subpath-test-projected-lz4b": Phase="Running", Reason="", readiness=false. Elapsed: 22.565058471s May 6 17:34:40.832: INFO: Pod "pod-subpath-test-projected-lz4b": Phase="Running", Reason="", readiness=false. Elapsed: 24.570148526s May 6 17:34:42.836: INFO: Pod "pod-subpath-test-projected-lz4b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.574203272s STEP: Saw pod success May 6 17:34:42.836: INFO: Pod "pod-subpath-test-projected-lz4b" satisfied condition "success or failure" May 6 17:34:42.839: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-lz4b container test-container-subpath-projected-lz4b: STEP: delete the pod May 6 17:34:42.901: INFO: Waiting for pod pod-subpath-test-projected-lz4b to disappear May 6 17:34:42.935: INFO: Pod pod-subpath-test-projected-lz4b no longer exists STEP: Deleting pod pod-subpath-test-projected-lz4b May 6 17:34:42.935: INFO: Deleting pod "pod-subpath-test-projected-lz4b" in namespace "e2e-tests-subpath-5bfzq" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:34:42.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-5bfzq" for this suite. May 6 17:34:49.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:34:49.125: INFO: namespace: e2e-tests-subpath-5bfzq, resource: bindings, ignored listing per whitelist May 6 17:34:49.147: INFO: namespace e2e-tests-subpath-5bfzq deletion completed in 6.106504192s • [SLOW TEST:33.138 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:34:49.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 6 17:34:49.301: INFO: Waiting up to 5m0s for pod "downward-api-e2ae7b8f-8fbf-11ea-a618-0242ac110019" in namespace "e2e-tests-downward-api-qmbfp" to be "success or failure" May 6 17:34:49.305: INFO: Pod "downward-api-e2ae7b8f-8fbf-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 3.884829ms May 6 17:34:51.378: INFO: Pod "downward-api-e2ae7b8f-8fbf-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076434979s May 6 17:34:54.073: INFO: Pod "downward-api-e2ae7b8f-8fbf-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.771680517s STEP: Saw pod success May 6 17:34:54.073: INFO: Pod "downward-api-e2ae7b8f-8fbf-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:34:54.075: INFO: Trying to get logs from node hunter-worker2 pod downward-api-e2ae7b8f-8fbf-11ea-a618-0242ac110019 container dapi-container: STEP: delete the pod May 6 17:34:54.306: INFO: Waiting for pod downward-api-e2ae7b8f-8fbf-11ea-a618-0242ac110019 to disappear May 6 17:34:54.311: INFO: Pod downward-api-e2ae7b8f-8fbf-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:34:54.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qmbfp" for this suite. May 6 17:35:00.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:35:00.436: INFO: namespace: e2e-tests-downward-api-qmbfp, resource: bindings, ignored listing per whitelist May 6 17:35:00.493: INFO: namespace e2e-tests-downward-api-qmbfp deletion completed in 6.178295356s • [SLOW TEST:11.345 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:35:00.493: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-gmnc8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gmnc8 to expose endpoints map[] May 6 17:35:01.043: INFO: Get endpoints failed (2.633382ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 6 17:35:02.048: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gmnc8 exposes endpoints map[] (1.007309633s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-gmnc8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gmnc8 to expose endpoints map[pod1:[100]] May 6 17:35:05.122: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gmnc8 exposes endpoints map[pod1:[100]] (3.066522969s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-gmnc8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gmnc8 to expose endpoints map[pod1:[100] pod2:[101]] May 6 17:35:08.205: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gmnc8 exposes endpoints map[pod1:[100] pod2:[101]] (3.079835225s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-gmnc8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gmnc8 to expose endpoints map[pod2:[101]] May 6 17:35:09.255: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gmnc8 exposes 
endpoints map[pod2:[101]] (1.04556222s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-gmnc8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gmnc8 to expose endpoints map[] May 6 17:35:10.308: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gmnc8 exposes endpoints map[] (1.048369669s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:35:10.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-gmnc8" for this suite. May 6 17:35:32.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:35:32.614: INFO: namespace: e2e-tests-services-gmnc8, resource: bindings, ignored listing per whitelist May 6 17:35:32.674: INFO: namespace e2e-tests-services-gmnc8 deletion completed in 22.156809799s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:32.181 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:35:32.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token May 6 17:35:33.328: INFO: Waiting up to 5m0s for pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-hwdsz" in namespace "e2e-tests-svcaccounts-7d2j2" to be "success or failure" May 6 17:35:33.336: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-hwdsz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.321599ms May 6 17:35:35.340: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-hwdsz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011857693s May 6 17:35:37.343: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-hwdsz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014752441s May 6 17:35:39.599: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-hwdsz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.270852001s May 6 17:35:41.603: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-hwdsz": Phase="Running", Reason="", readiness=false. Elapsed: 8.2748058s May 6 17:35:43.608: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-hwdsz": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.27966029s STEP: Saw pod success May 6 17:35:43.608: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-hwdsz" satisfied condition "success or failure" May 6 17:35:43.611: INFO: Trying to get logs from node hunter-worker pod pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-hwdsz container token-test: STEP: delete the pod May 6 17:35:43.647: INFO: Waiting for pod pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-hwdsz to disappear May 6 17:35:43.675: INFO: Pod pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-hwdsz no longer exists STEP: Creating a pod to test consume service account root CA May 6 17:35:43.680: INFO: Waiting up to 5m0s for pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-lm6pd" in namespace "e2e-tests-svcaccounts-7d2j2" to be "success or failure" May 6 17:35:43.684: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-lm6pd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.512734ms May 6 17:35:45.689: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-lm6pd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008744787s May 6 17:35:47.692: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-lm6pd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012281511s May 6 17:35:49.697: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-lm6pd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017629009s May 6 17:35:51.702: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-lm6pd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.021790811s STEP: Saw pod success May 6 17:35:51.702: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-lm6pd" satisfied condition "success or failure" May 6 17:35:51.704: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-lm6pd container root-ca-test: STEP: delete the pod May 6 17:35:51.756: INFO: Waiting for pod pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-lm6pd to disappear May 6 17:35:51.762: INFO: Pod pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-lm6pd no longer exists STEP: Creating a pod to test consume service account namespace May 6 17:35:51.765: INFO: Waiting up to 5m0s for pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-wt2k6" in namespace "e2e-tests-svcaccounts-7d2j2" to be "success or failure" May 6 17:35:51.807: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-wt2k6": Phase="Pending", Reason="", readiness=false. Elapsed: 42.367759ms May 6 17:35:53.811: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-wt2k6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045855164s May 6 17:35:55.815: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-wt2k6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049830735s May 6 17:35:57.819: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-wt2k6": Phase="Running", Reason="", readiness=false. Elapsed: 6.054267023s May 6 17:35:59.823: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-wt2k6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.058020965s STEP: Saw pod success May 6 17:35:59.823: INFO: Pod "pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-wt2k6" satisfied condition "success or failure" May 6 17:35:59.826: INFO: Trying to get logs from node hunter-worker pod pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-wt2k6 container namespace-test: STEP: delete the pod May 6 17:35:59.949: INFO: Waiting for pod pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-wt2k6 to disappear May 6 17:35:59.984: INFO: Pod pod-service-account-fcee1641-8fbf-11ea-a618-0242ac110019-wt2k6 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:35:59.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-7d2j2" for this suite. May 6 17:36:08.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:36:08.019: INFO: namespace: e2e-tests-svcaccounts-7d2j2, resource: bindings, ignored listing per whitelist May 6 17:36:08.078: INFO: namespace e2e-tests-svcaccounts-7d2j2 deletion completed in 8.090497041s • [SLOW TEST:35.403 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 
17:36:08.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments May 6 17:36:08.243: INFO: Waiting up to 5m0s for pod "client-containers-11bb3414-8fc0-11ea-a618-0242ac110019" in namespace "e2e-tests-containers-8668l" to be "success or failure" May 6 17:36:08.311: INFO: Pod "client-containers-11bb3414-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 67.535339ms May 6 17:36:10.313: INFO: Pod "client-containers-11bb3414-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07008442s May 6 17:36:12.360: INFO: Pod "client-containers-11bb3414-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116166944s May 6 17:36:14.364: INFO: Pod "client-containers-11bb3414-8fc0-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.120892822s STEP: Saw pod success May 6 17:36:14.364: INFO: Pod "client-containers-11bb3414-8fc0-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:36:14.368: INFO: Trying to get logs from node hunter-worker2 pod client-containers-11bb3414-8fc0-11ea-a618-0242ac110019 container test-container: STEP: delete the pod May 6 17:36:14.403: INFO: Waiting for pod client-containers-11bb3414-8fc0-11ea-a618-0242ac110019 to disappear May 6 17:36:14.415: INFO: Pod client-containers-11bb3414-8fc0-11ea-a618-0242ac110019 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:36:14.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-8668l" for this suite. May 6 17:36:22.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:36:22.454: INFO: namespace: e2e-tests-containers-8668l, resource: bindings, ignored listing per whitelist May 6 17:36:22.512: INFO: namespace e2e-tests-containers-8668l deletion completed in 8.093950493s • [SLOW TEST:14.434 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:36:22.513: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-1aa48b27-8fc0-11ea-a618-0242ac110019 STEP: Creating a pod to test consume secrets May 6 17:36:23.261: INFO: Waiting up to 5m0s for pod "pod-secrets-1aa5629d-8fc0-11ea-a618-0242ac110019" in namespace "e2e-tests-secrets-xn2mj" to be "success or failure" May 6 17:36:23.299: INFO: Pod "pod-secrets-1aa5629d-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 37.950824ms May 6 17:36:25.689: INFO: Pod "pod-secrets-1aa5629d-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.427615573s May 6 17:36:27.693: INFO: Pod "pod-secrets-1aa5629d-8fc0-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.431909974s STEP: Saw pod success May 6 17:36:27.693: INFO: Pod "pod-secrets-1aa5629d-8fc0-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:36:27.697: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-1aa5629d-8fc0-11ea-a618-0242ac110019 container secret-volume-test: STEP: delete the pod May 6 17:36:27.722: INFO: Waiting for pod pod-secrets-1aa5629d-8fc0-11ea-a618-0242ac110019 to disappear May 6 17:36:27.844: INFO: Pod pod-secrets-1aa5629d-8fc0-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:36:27.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xn2mj" for this suite. 
May 6 17:36:35.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:36:35.939: INFO: namespace: e2e-tests-secrets-xn2mj, resource: bindings, ignored listing per whitelist May 6 17:36:35.987: INFO: namespace e2e-tests-secrets-xn2mj deletion completed in 8.138456449s • [SLOW TEST:13.474 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:36:35.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 6 17:36:36.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:36.585: INFO: stderr: "" May 6 17:36:36.585: INFO: stdout: "replicationcontroller/update-demo-nautilus 
created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 17:36:36.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:36.733: INFO: stderr: "" May 6 17:36:36.733: INFO: stdout: "update-demo-nautilus-9brvj update-demo-nautilus-snk24 " May 6 17:36:36.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9brvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:36.837: INFO: stderr: "" May 6 17:36:36.837: INFO: stdout: "" May 6 17:36:36.837: INFO: update-demo-nautilus-9brvj is created but not running May 6 17:36:41.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:42.144: INFO: stderr: "" May 6 17:36:42.144: INFO: stdout: "update-demo-nautilus-9brvj update-demo-nautilus-snk24 " May 6 17:36:42.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9brvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:42.243: INFO: stderr: "" May 6 17:36:42.243: INFO: stdout: "true" May 6 17:36:42.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9brvj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:42.337: INFO: stderr: "" May 6 17:36:42.337: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 17:36:42.337: INFO: validating pod update-demo-nautilus-9brvj May 6 17:36:42.342: INFO: got data: { "image": "nautilus.jpg" } May 6 17:36:42.342: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 17:36:42.342: INFO: update-demo-nautilus-9brvj is verified up and running May 6 17:36:42.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-snk24 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:42.438: INFO: stderr: "" May 6 17:36:42.438: INFO: stdout: "true" May 6 17:36:42.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-snk24 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:42.539: INFO: stderr: "" May 6 17:36:42.539: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 17:36:42.539: INFO: validating pod update-demo-nautilus-snk24 May 6 17:36:42.544: INFO: got data: { "image": "nautilus.jpg" } May 6 17:36:42.544: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 6 17:36:42.544: INFO: update-demo-nautilus-snk24 is verified up and running STEP: scaling down the replication controller May 6 17:36:42.546: INFO: scanned /root for discovery docs: May 6 17:36:42.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:43.697: INFO: stderr: "" May 6 17:36:43.697: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 17:36:43.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:43.976: INFO: stderr: "" May 6 17:36:43.976: INFO: stdout: "update-demo-nautilus-9brvj update-demo-nautilus-snk24 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 17:36:48.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:49.103: INFO: stderr: "" May 6 17:36:49.103: INFO: stdout: "update-demo-nautilus-9brvj update-demo-nautilus-snk24 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 17:36:54.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:54.225: INFO: stderr: "" May 6 17:36:54.225: INFO: stdout: "update-demo-nautilus-9brvj " May 6 17:36:54.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9brvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:54.328: INFO: stderr: "" May 6 17:36:54.328: INFO: stdout: "true" May 6 17:36:54.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9brvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:54.443: INFO: stderr: "" May 6 17:36:54.443: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 17:36:54.443: INFO: validating pod update-demo-nautilus-9brvj May 6 17:36:54.446: INFO: got data: { "image": "nautilus.jpg" } May 6 17:36:54.446: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 17:36:54.446: INFO: update-demo-nautilus-9brvj is verified up and running STEP: scaling up the replication controller May 6 17:36:54.447: INFO: scanned /root for discovery docs: May 6 17:36:54.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:55.665: INFO: stderr: "" May 6 17:36:55.665: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 17:36:55.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:55.852: INFO: stderr: "" May 6 17:36:55.852: INFO: stdout: "update-demo-nautilus-9brvj update-demo-nautilus-9hv6n " May 6 17:36:55.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9brvj -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:57.604: INFO: stderr: "" May 6 17:36:57.604: INFO: stdout: "true" May 6 17:36:57.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9brvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:57.838: INFO: stderr: "" May 6 17:36:57.838: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 17:36:57.838: INFO: validating pod update-demo-nautilus-9brvj May 6 17:36:57.873: INFO: got data: { "image": "nautilus.jpg" } May 6 17:36:57.874: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 17:36:57.874: INFO: update-demo-nautilus-9brvj is verified up and running May 6 17:36:57.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9hv6n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ffnb9' May 6 17:36:58.284: INFO: stderr: "" May 6 17:36:58.284: INFO: stdout: "" May 6 17:36:58.284: INFO: update-demo-nautilus-9hv6n is created but not running May 6 17:37:03.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ffnb9' May 6 17:37:03.879: INFO: stderr: "" May 6 17:37:03.879: INFO: stdout: "update-demo-nautilus-9brvj update-demo-nautilus-9hv6n " May 6 17:37:03.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9brvj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ffnb9' May 6 17:37:03.983: INFO: stderr: "" May 6 17:37:03.983: INFO: stdout: "true" May 6 17:37:03.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9brvj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ffnb9' May 6 17:37:04.148: INFO: stderr: "" May 6 17:37:04.148: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 17:37:04.148: INFO: validating pod update-demo-nautilus-9brvj May 6 17:37:04.183: INFO: got data: { "image": "nautilus.jpg" } May 6 17:37:04.183: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 17:37:04.183: INFO: update-demo-nautilus-9brvj is verified up and running May 6 17:37:04.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9hv6n -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ffnb9' May 6 17:37:04.377: INFO: stderr: "" May 6 17:37:04.377: INFO: stdout: "true" May 6 17:37:04.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9hv6n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ffnb9' May 6 17:37:04.557: INFO: stderr: "" May 6 17:37:04.557: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 17:37:04.558: INFO: validating pod update-demo-nautilus-9hv6n May 6 17:37:04.562: INFO: got data: { "image": "nautilus.jpg" } May 6 17:37:04.562: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 17:37:04.562: INFO: update-demo-nautilus-9hv6n is verified up and running STEP: using delete to clean up resources May 6 17:37:04.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ffnb9' May 6 17:37:04.672: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 6 17:37:04.672: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 6 17:37:04.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-ffnb9' May 6 17:37:04.869: INFO: stderr: "No resources found.\n" May 6 17:37:04.869: INFO: stdout: "" May 6 17:37:04.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-ffnb9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 17:37:04.976: INFO: stderr: "" May 6 17:37:04.976: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:37:04.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ffnb9" for this suite. 
May 6 17:37:13.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:37:13.038: INFO: namespace: e2e-tests-kubectl-ffnb9, resource: bindings, ignored listing per whitelist May 6 17:37:13.089: INFO: namespace e2e-tests-kubectl-ffnb9 deletion completed in 8.107006425s • [SLOW TEST:37.102 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:37:13.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-388d67cb-8fc0-11ea-a618-0242ac110019 STEP: Creating a pod to test consume configMaps May 6 17:37:13.377: INFO: Waiting up to 5m0s for pod "pod-configmaps-388ec6a7-8fc0-11ea-a618-0242ac110019" in namespace "e2e-tests-configmap-m4crj" to be "success or failure" May 6 17:37:13.411: INFO: Pod "pod-configmaps-388ec6a7-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", 
readiness=false. Elapsed: 33.700959ms May 6 17:37:15.468: INFO: Pod "pod-configmaps-388ec6a7-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090801095s May 6 17:37:17.486: INFO: Pod "pod-configmaps-388ec6a7-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108233003s May 6 17:37:19.780: INFO: Pod "pod-configmaps-388ec6a7-8fc0-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.402686663s STEP: Saw pod success May 6 17:37:19.780: INFO: Pod "pod-configmaps-388ec6a7-8fc0-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:37:19.784: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-388ec6a7-8fc0-11ea-a618-0242ac110019 container configmap-volume-test: STEP: delete the pod May 6 17:37:19.985: INFO: Waiting for pod pod-configmaps-388ec6a7-8fc0-11ea-a618-0242ac110019 to disappear May 6 17:37:19.991: INFO: Pod pod-configmaps-388ec6a7-8fc0-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:37:19.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-m4crj" for this suite. 
May 6 17:37:28.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:37:28.048: INFO: namespace: e2e-tests-configmap-m4crj, resource: bindings, ignored listing per whitelist May 6 17:37:28.232: INFO: namespace e2e-tests-configmap-m4crj deletion completed in 8.237858031s • [SLOW TEST:15.143 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:37:28.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 6 17:37:29.068: INFO: Waiting up to 5m0s for pod "pod-41dd2e32-8fc0-11ea-a618-0242ac110019" in namespace "e2e-tests-emptydir-hs2hx" to be "success or failure" May 6 17:37:29.446: INFO: Pod "pod-41dd2e32-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 378.316964ms May 6 17:37:31.530: INFO: Pod "pod-41dd2e32-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.462069577s May 6 17:37:33.678: INFO: Pod "pod-41dd2e32-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.610395447s May 6 17:37:35.681: INFO: Pod "pod-41dd2e32-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 6.613586549s May 6 17:37:37.773: INFO: Pod "pod-41dd2e32-8fc0-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.705591942s STEP: Saw pod success May 6 17:37:37.773: INFO: Pod "pod-41dd2e32-8fc0-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 17:37:37.776: INFO: Trying to get logs from node hunter-worker pod pod-41dd2e32-8fc0-11ea-a618-0242ac110019 container test-container: STEP: delete the pod May 6 17:37:37.860: INFO: Waiting for pod pod-41dd2e32-8fc0-11ea-a618-0242ac110019 to disappear May 6 17:37:37.867: INFO: Pod pod-41dd2e32-8fc0-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 17:37:37.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-hs2hx" for this suite. 
May 6 17:37:50.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:37:52.095: INFO: namespace: e2e-tests-emptydir-hs2hx, resource: bindings, ignored listing per whitelist May 6 17:37:52.129: INFO: namespace e2e-tests-emptydir-hs2hx deletion completed in 14.258140546s • [SLOW TEST:23.896 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:37:52.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 6 17:37:53.818: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 17:37:54.047: INFO: Waiting for terminating namespaces to be deleted... 
May 6 17:37:54.274: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
May 6 17:37:54.292: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
May 6 17:37:54.292: INFO: Container kube-proxy ready: true, restart count 0
May 6 17:37:54.292: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 6 17:37:54.292: INFO: Container kindnet-cni ready: true, restart count 0
May 6 17:37:54.292: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 6 17:37:54.292: INFO: Container coredns ready: true, restart count 0
May 6 17:37:54.292: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
May 6 17:37:54.297: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 6 17:37:54.297: INFO: Container kindnet-cni ready: true, restart count 0
May 6 17:37:54.297: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 6 17:37:54.297: INFO: Container coredns ready: true, restart count 0
May 6 17:37:54.297: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 6 17:37:54.297: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
May 6 17:37:54.917: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker
May 6 17:37:54.917: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2
May 6 17:37:54.917: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker
May 6 17:37:54.917: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2
May 6 17:37:54.917: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2
May 6 17:37:54.917: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-51540d4a-8fc0-11ea-a618-0242ac110019.160c811030078618], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-r6jk9/filler-pod-51540d4a-8fc0-11ea-a618-0242ac110019 to hunter-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-51540d4a-8fc0-11ea-a618-0242ac110019.160c8110df9e05ed], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-51540d4a-8fc0-11ea-a618-0242ac110019.160c811137a9d917], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-51540d4a-8fc0-11ea-a618-0242ac110019.160c811148869082], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5154c2e8-8fc0-11ea-a618-0242ac110019.160c8110319d5130], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-r6jk9/filler-pod-5154c2e8-8fc0-11ea-a618-0242ac110019 to hunter-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5154c2e8-8fc0-11ea-a618-0242ac110019.160c8110cd394ba2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5154c2e8-8fc0-11ea-a618-0242ac110019.160c811130d29989], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5154c2e8-8fc0-11ea-a618-0242ac110019.160c811141aa76aa], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Warning], Name = [additional-pod.160c811196be45d3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:38:02.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-r6jk9" for this suite.
May 6 17:38:08.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:38:08.502: INFO: namespace: e2e-tests-sched-pred-r6jk9, resource: bindings, ignored listing per whitelist
May 6 17:38:08.502: INFO: namespace e2e-tests-sched-pred-r6jk9 deletion completed in 6.166984979s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:16.373 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:38:08.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:38:08.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-sbf2g" for this suite.
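Editor's note: the "Pods Set QOS Class" test above verifies that the API server sets status.qosClass when a pod is submitted. As a rough illustration of the classification rule (a simplified Python sketch, not the e2e framework's Go code; the real rule also handles request-defaulting from limits):

```python
# Simplified sketch of the Kubernetes QoS classification rule.
# Guaranteed: every container sets cpu and memory limits, with requests
#   equal to limits. BestEffort: no container sets any request or limit.
# Burstable: everything else.

def qos_class(containers):
    """containers: list of dicts like {"requests": {...}, "limits": {...}}."""
    guaranteed = True
    best_effort = True
    for c in containers:
        requests = c.get("requests", {})
        limits = c.get("limits", {})
        if requests or limits:
            best_effort = False
        for res in ("cpu", "memory"):
            if limits.get(res) is None or requests.get(res) != limits.get(res):
                guaranteed = False
    if best_effort:
        return "BestEffort"
    if guaranteed:
        return "Guaranteed"
    return "Burstable"
```

For example, a single container requesting cpu=100m with no limit is classified Burstable, which is what the filler pods in the scheduling test above would receive.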
May 6 17:38:31.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:38:31.020: INFO: namespace: e2e-tests-pods-sbf2g, resource: bindings, ignored listing per whitelist
May 6 17:38:31.079: INFO: namespace e2e-tests-pods-sbf2g deletion completed in 22.129058258s
• [SLOW TEST:22.577 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:38:31.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
May 6 17:38:31.832: INFO: Waiting up to 5m0s for pod "pod-673b7f6a-8fc0-11ea-a618-0242ac110019" in namespace "e2e-tests-emptydir-d4rtx" to be "success or failure"
May 6 17:38:32.098: INFO: Pod "pod-673b7f6a-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 266.051996ms
May 6 17:38:34.101: INFO: Pod "pod-673b7f6a-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269354348s
May 6 17:38:36.107: INFO: Pod "pod-673b7f6a-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274515937s
May 6 17:38:38.158: INFO: Pod "pod-673b7f6a-8fc0-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 6.325509105s
May 6 17:38:40.161: INFO: Pod "pod-673b7f6a-8fc0-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.328741955s
STEP: Saw pod success
May 6 17:38:40.161: INFO: Pod "pod-673b7f6a-8fc0-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:38:40.163: INFO: Trying to get logs from node hunter-worker2 pod pod-673b7f6a-8fc0-11ea-a618-0242ac110019 container test-container:
STEP: delete the pod
May 6 17:38:40.332: INFO: Waiting for pod pod-673b7f6a-8fc0-11ea-a618-0242ac110019 to disappear
May 6 17:38:40.499: INFO: Pod pod-673b7f6a-8fc0-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:38:40.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-d4rtx" for this suite.
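Editor's note: the repeated `Waiting up to 5m0s for pod ... to be "success or failure"` entries follow a poll-until-terminal-phase loop. A minimal sketch of that pattern (the `get_phase` callback is hypothetical; this is not the framework's actual helper):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it reports Succeeded or Failed, or time out.

    Mirrors the log pattern above: each poll records the current phase
    and the elapsed time since polling began.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still in phase {phase!r} after {timeout}s")
        sleep(interval)
```

The timeout-and-interval shape matches the log: a few Pending polls, a Running poll, then the terminal Succeeded phase satisfies the "success or failure" condition.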
May 6 17:38:48.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:38:48.575: INFO: namespace: e2e-tests-emptydir-d4rtx, resource: bindings, ignored listing per whitelist
May 6 17:38:48.636: INFO: namespace e2e-tests-emptydir-d4rtx deletion completed in 8.133230339s
• [SLOW TEST:17.556 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:38:48.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 6 17:38:49.545: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71b768a0-8fc0-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-xvzj2" to be "success or failure"
May 6 17:38:49.668: INFO: Pod "downwardapi-volume-71b768a0-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 122.565749ms
May 6 17:38:52.284: INFO: Pod "downwardapi-volume-71b768a0-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.739033786s
May 6 17:38:54.494: INFO: Pod "downwardapi-volume-71b768a0-8fc0-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.948902802s
May 6 17:38:56.498: INFO: Pod "downwardapi-volume-71b768a0-8fc0-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 6.952798794s
May 6 17:38:58.502: INFO: Pod "downwardapi-volume-71b768a0-8fc0-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.95720189s
STEP: Saw pod success
May 6 17:38:58.502: INFO: Pod "downwardapi-volume-71b768a0-8fc0-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:38:58.505: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-71b768a0-8fc0-11ea-a618-0242ac110019 container client-container:
STEP: delete the pod
May 6 17:38:58.914: INFO: Waiting for pod downwardapi-volume-71b768a0-8fc0-11ea-a618-0242ac110019 to disappear
May 6 17:38:58.922: INFO: Pod downwardapi-volume-71b768a0-8fc0-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:38:58.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xvzj2" for this suite.
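Editor's note: the downward API test above exposes node allocatable CPU as the container's default limit, and CPU quantities in this log appear both as whole cores and as millicores ("100m", as in the scheduling test's `requesting resource cpu=100m` lines). A simplified converter for just those two forms (the real Kubernetes quantity grammar is much richer; this is an illustrative subset only):

```python
def cpu_to_millicores(q):
    """Convert a CPU quantity string to integer millicores.

    Handles only the forms seen in this log: plain core counts such as
    "2" or "0.5", and millicore values such as "100m". Not a full
    implementation of the Kubernetes resource.Quantity format.
    """
    q = q.strip()
    if q.endswith("m"):
        return int(q[:-1])          # already millicores
    return int(float(q) * 1000)     # cores -> millicores
```

This is the arithmetic behind "Starting Pods to consume most of the cluster CPU": summing each pod's request in millicores against the node's allocatable CPU decides whether the additional pod fails with Insufficient cpu.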
May 6 17:39:05.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:39:05.182: INFO: namespace: e2e-tests-projected-xvzj2, resource: bindings, ignored listing per whitelist
May 6 17:39:05.225: INFO: namespace e2e-tests-projected-xvzj2 deletion completed in 6.301067853s
• [SLOW TEST:16.589 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:39:05.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
May 6 17:39:05.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
May 6 17:39:05.835: INFO: stderr: ""
May 6 17:39:05.835: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:39:05.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bqn7z" for this suite.
May 6 17:39:11.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:39:11.921: INFO: namespace: e2e-tests-kubectl-bqn7z, resource: bindings, ignored listing per whitelist
May 6 17:39:11.960: INFO: namespace e2e-tests-kubectl-bqn7z deletion completed in 6.120563536s
• [SLOW TEST:6.735 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl api-versions
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Watchers
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery]
Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:39:11.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May 6 17:39:12.095: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nffcx,SelfLink:/api/v1/namespaces/e2e-tests-watch-nffcx/configmaps/e2e-watch-test-configmap-a,UID:7f5413e5-8fc0-11ea-99e8-0242ac110002,ResourceVersion:9086308,Generation:0,CreationTimestamp:2020-05-06 17:39:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 6 17:39:12.095: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nffcx,SelfLink:/api/v1/namespaces/e2e-tests-watch-nffcx/configmaps/e2e-watch-test-configmap-a,UID:7f5413e5-8fc0-11ea-99e8-0242ac110002,ResourceVersion:9086308,Generation:0,CreationTimestamp:2020-05-06 17:39:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May 6 17:39:22.102: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nffcx,SelfLink:/api/v1/namespaces/e2e-tests-watch-nffcx/configmaps/e2e-watch-test-configmap-a,UID:7f5413e5-8fc0-11ea-99e8-0242ac110002,ResourceVersion:9086328,Generation:0,CreationTimestamp:2020-05-06 17:39:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
May 6 17:39:22.103: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nffcx,SelfLink:/api/v1/namespaces/e2e-tests-watch-nffcx/configmaps/e2e-watch-test-configmap-a,UID:7f5413e5-8fc0-11ea-99e8-0242ac110002,ResourceVersion:9086328,Generation:0,CreationTimestamp:2020-05-06 17:39:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
May 6 17:39:32.110: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nffcx,SelfLink:/api/v1/namespaces/e2e-tests-watch-nffcx/configmaps/e2e-watch-test-configmap-a,UID:7f5413e5-8fc0-11ea-99e8-0242ac110002,ResourceVersion:9086347,Generation:0,CreationTimestamp:2020-05-06 17:39:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 6 17:39:32.110: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nffcx,SelfLink:/api/v1/namespaces/e2e-tests-watch-nffcx/configmaps/e2e-watch-test-configmap-a,UID:7f5413e5-8fc0-11ea-99e8-0242ac110002,ResourceVersion:9086347,Generation:0,CreationTimestamp:2020-05-06 17:39:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
May 6 17:39:42.116: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nffcx,SelfLink:/api/v1/namespaces/e2e-tests-watch-nffcx/configmaps/e2e-watch-test-configmap-a,UID:7f5413e5-8fc0-11ea-99e8-0242ac110002,ResourceVersion:9086367,Generation:0,CreationTimestamp:2020-05-06 17:39:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 6 17:39:42.116: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-nffcx,SelfLink:/api/v1/namespaces/e2e-tests-watch-nffcx/configmaps/e2e-watch-test-configmap-a,UID:7f5413e5-8fc0-11ea-99e8-0242ac110002,ResourceVersion:9086367,Generation:0,CreationTimestamp:2020-05-06 17:39:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
May 6 17:39:52.149: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nffcx,SelfLink:/api/v1/namespaces/e2e-tests-watch-nffcx/configmaps/e2e-watch-test-configmap-b,UID:972f770a-8fc0-11ea-99e8-0242ac110002,ResourceVersion:9086387,Generation:0,CreationTimestamp:2020-05-06 17:39:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 6 17:39:52.149: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nffcx,SelfLink:/api/v1/namespaces/e2e-tests-watch-nffcx/configmaps/e2e-watch-test-configmap-b,UID:972f770a-8fc0-11ea-99e8-0242ac110002,ResourceVersion:9086387,Generation:0,CreationTimestamp:2020-05-06 17:39:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
May 6 17:40:02.155: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nffcx,SelfLink:/api/v1/namespaces/e2e-tests-watch-nffcx/configmaps/e2e-watch-test-configmap-b,UID:972f770a-8fc0-11ea-99e8-0242ac110002,ResourceVersion:9086406,Generation:0,CreationTimestamp:2020-05-06 17:39:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 6 17:40:02.155: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-nffcx,SelfLink:/api/v1/namespaces/e2e-tests-watch-nffcx/configmaps/e2e-watch-test-configmap-b,UID:972f770a-8fc0-11ea-99e8-0242ac110002,ResourceVersion:9086406,Generation:0,CreationTimestamp:2020-05-06 17:39:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:40:12.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-nffcx" for this suite.
May 6 17:40:18.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:40:18.186: INFO: namespace: e2e-tests-watch-nffcx, resource: bindings, ignored listing per whitelist
May 6 17:40:18.248: INFO: namespace e2e-tests-watch-nffcx deletion completed in 6.088076154s
• [SLOW TEST:66.287 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:40:18.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name
cm-test-opt-del-a6dce605-8fc0-11ea-a618-0242ac110019
STEP: Creating configMap with name cm-test-opt-upd-a6dce654-8fc0-11ea-a618-0242ac110019
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a6dce605-8fc0-11ea-a618-0242ac110019
STEP: Updating configmap cm-test-opt-upd-a6dce654-8fc0-11ea-a618-0242ac110019
STEP: Creating configMap with name cm-test-opt-create-a6dce66e-8fc0-11ea-a618-0242ac110019
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:40:28.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-thplm" for this suite.
May 6 17:40:55.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:40:55.389: INFO: namespace: e2e-tests-configmap-thplm, resource: bindings, ignored listing per whitelist
May 6 17:40:55.395: INFO: namespace e2e-tests-configmap-thplm deletion completed in 26.574114147s
• [SLOW TEST:37.147 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:40:55.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 6 17:40:56.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-z5qhk'
May 6 17:41:09.019: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 6 17:41:09.019: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
May 6 17:41:13.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-z5qhk'
May 6 17:41:13.183: INFO: stderr: ""
May 6 17:41:13.183: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:41:13.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-z5qhk" for this suite.
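Editor's note: the earlier Watchers test registers three watches (label A, label B, and A-or-B) and asserts that each observes exactly the matching ADDED/MODIFIED/DELETED notifications. The dispatch logic can be sketched without a cluster (a hypothetical in-memory event bus for illustration, not client-go):

```python
class LabelWatcher:
    """Collects events for objects whose label value is in `accepted`."""

    def __init__(self, accepted):
        self.accepted = set(accepted)
        self.events = []

    def observe(self, event_type, labels):
        # Deliver only events whose label selector matches this watcher,
        # mirroring the label-A / label-B / A-or-B watches in the test.
        if labels.get("watch-this-configmap") in self.accepted:
            self.events.append(event_type)

watch_a = LabelWatcher({"multiple-watchers-A"})
watch_b = LabelWatcher({"multiple-watchers-B"})
watch_ab = LabelWatcher({"multiple-watchers-A", "multiple-watchers-B"})

# Event sequence from the log: configmap A is added, modified, deleted;
# configmap B is added, then deleted.
for evt, labels in [
    ("ADDED", {"watch-this-configmap": "multiple-watchers-A"}),
    ("MODIFIED", {"watch-this-configmap": "multiple-watchers-A"}),
    ("DELETED", {"watch-this-configmap": "multiple-watchers-A"}),
    ("ADDED", {"watch-this-configmap": "multiple-watchers-B"}),
    ("DELETED", {"watch-this-configmap": "multiple-watchers-B"}),
]:
    for w in (watch_a, watch_b, watch_ab):
        w.observe(evt, labels)
```

After the loop, watcher A has seen only the A events, watcher B only the B events, and the A-or-B watcher all five, which is the invariant the conformance test checks.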
May 6 17:43:17.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 17:43:17.440: INFO: namespace: e2e-tests-kubectl-z5qhk, resource: bindings, ignored listing per whitelist May 6 17:43:17.474: INFO: namespace e2e-tests-kubectl-z5qhk deletion completed in 2m4.286627142s • [SLOW TEST:142.078 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 17:43:17.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 6 17:43:22.037: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:43:46.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-2bcz7" for this suite.
May 6 17:43:52.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:43:52.188: INFO: namespace: e2e-tests-namespaces-2bcz7, resource: bindings, ignored listing per whitelist
May 6 17:43:52.240: INFO: namespace e2e-tests-namespaces-2bcz7 deletion completed in 6.097807459s
STEP: Destroying namespace "e2e-tests-nsdeletetest-s2jgl" for this suite.
May 6 17:43:52.243: INFO: Namespace e2e-tests-nsdeletetest-s2jgl was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-wc8j6" for this suite.
May 6 17:43:58.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:43:58.480: INFO: namespace: e2e-tests-nsdeletetest-wc8j6, resource: bindings, ignored listing per whitelist
May 6 17:43:58.496: INFO: namespace e2e-tests-nsdeletetest-wc8j6 deletion completed in 6.253103889s
• [SLOW TEST:41.022 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:43:58.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-2a34d374-8fc1-11ea-a618-0242ac110019
STEP: Creating secret with name s-test-opt-upd-2a34d403-8fc1-11ea-a618-0242ac110019
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2a34d374-8fc1-11ea-a618-0242ac110019
STEP: Updating secret s-test-opt-upd-2a34d403-8fc1-11ea-a618-0242ac110019
STEP: Creating secret with name s-test-opt-create-2a34d42c-8fc1-11ea-a618-0242ac110019
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:45:18.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-z79sq" for this suite.
May 6 17:45:48.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:45:48.586: INFO: namespace: e2e-tests-secrets-z79sq, resource: bindings, ignored listing per whitelist
May 6 17:45:48.701: INFO: namespace e2e-tests-secrets-z79sq deletion completed in 30.28904607s
• [SLOW TEST:110.205 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:45:48.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
May 6 17:45:48.975: INFO: Waiting up to 5m0s for pod "var-expansion-6bd46d92-8fc1-11ea-a618-0242ac110019" in namespace "e2e-tests-var-expansion-xppdg" to be "success or failure"
May 6 17:45:49.171: INFO: Pod "var-expansion-6bd46d92-8fc1-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 195.787537ms
May 6 17:45:51.176: INFO: Pod "var-expansion-6bd46d92-8fc1-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200218348s
May 6 17:45:53.209: INFO: Pod "var-expansion-6bd46d92-8fc1-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233501454s
May 6 17:45:55.212: INFO: Pod "var-expansion-6bd46d92-8fc1-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.237078386s
STEP: Saw pod success
May 6 17:45:55.212: INFO: Pod "var-expansion-6bd46d92-8fc1-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:45:55.215: INFO: Trying to get logs from node hunter-worker pod var-expansion-6bd46d92-8fc1-11ea-a618-0242ac110019 container dapi-container:
STEP: delete the pod
May 6 17:45:55.485: INFO: Waiting for pod var-expansion-6bd46d92-8fc1-11ea-a618-0242ac110019 to disappear
May 6 17:45:55.796: INFO: Pod var-expansion-6bd46d92-8fc1-11ea-a618-0242ac110019 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:45:55.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-xppdg" for this suite.
May 6 17:46:04.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:46:04.067: INFO: namespace: e2e-tests-var-expansion-xppdg, resource: bindings, ignored listing per whitelist
May 6 17:46:04.096: INFO: namespace e2e-tests-var-expansion-xppdg deletion completed in 8.296595973s
• [SLOW TEST:15.395 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:46:04.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:46:12.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-k9xkz" for this suite.
May 6 17:47:02.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:47:03.116: INFO: namespace: e2e-tests-kubelet-test-k9xkz, resource: bindings, ignored listing per whitelist
May 6 17:47:03.126: INFO: namespace e2e-tests-kubelet-test-k9xkz deletion completed in 50.61938314s
• [SLOW TEST:59.030 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:47:03.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-983ef002-8fc1-11ea-a618-0242ac110019
STEP: Creating a pod to test consume configMaps
May 6 17:47:03.580: INFO: Waiting up to 5m0s for pod "pod-configmaps-985788d9-8fc1-11ea-a618-0242ac110019" in namespace "e2e-tests-configmap-rsb9z" to be "success or failure"
May 6 17:47:03.638: INFO: Pod "pod-configmaps-985788d9-8fc1-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 57.778394ms
May 6 17:47:05.642: INFO: Pod "pod-configmaps-985788d9-8fc1-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061882889s
May 6 17:47:07.865: INFO: Pod "pod-configmaps-985788d9-8fc1-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285233285s
May 6 17:47:09.868: INFO: Pod "pod-configmaps-985788d9-8fc1-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.288062295s
STEP: Saw pod success
May 6 17:47:09.868: INFO: Pod "pod-configmaps-985788d9-8fc1-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:47:09.870: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-985788d9-8fc1-11ea-a618-0242ac110019 container configmap-volume-test:
STEP: delete the pod
May 6 17:47:09.900: INFO: Waiting for pod pod-configmaps-985788d9-8fc1-11ea-a618-0242ac110019 to disappear
May 6 17:47:09.920: INFO: Pod pod-configmaps-985788d9-8fc1-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:47:09.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rsb9z" for this suite.
May 6 17:47:22.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:47:22.826: INFO: namespace: e2e-tests-configmap-rsb9z, resource: bindings, ignored listing per whitelist
May 6 17:47:23.288: INFO: namespace e2e-tests-configmap-rsb9z deletion completed in 13.36471353s
• [SLOW TEST:20.161 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:47:23.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-99f6m
May 6 17:47:32.503: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-99f6m
STEP: checking the pod's current state and verifying that restartCount is present
May 6 17:47:32.507: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:51:34.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-99f6m" for this suite.
May 6 17:51:42.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:51:42.907: INFO: namespace: e2e-tests-container-probe-99f6m, resource: bindings, ignored listing per whitelist
May 6 17:51:42.949: INFO: namespace e2e-tests-container-probe-99f6m deletion completed in 8.104206136s
• [SLOW TEST:259.661 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:51:42.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
May 6 17:51:43.212: INFO: Waiting up to 5m0s for pod "var-expansion-3f06590c-8fc2-11ea-a618-0242ac110019" in namespace "e2e-tests-var-expansion-mhlgh" to be "success or failure"
May 6 17:51:43.332: INFO: Pod "var-expansion-3f06590c-8fc2-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 119.806189ms
May 6 17:51:45.335: INFO: Pod "var-expansion-3f06590c-8fc2-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122924984s
May 6 17:51:47.338: INFO: Pod "var-expansion-3f06590c-8fc2-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126410427s
May 6 17:51:49.342: INFO: Pod "var-expansion-3f06590c-8fc2-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130173989s
May 6 17:51:51.347: INFO: Pod "var-expansion-3f06590c-8fc2-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.134990617s
STEP: Saw pod success
May 6 17:51:51.347: INFO: Pod "var-expansion-3f06590c-8fc2-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:51:51.350: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-3f06590c-8fc2-11ea-a618-0242ac110019 container dapi-container:
STEP: delete the pod
May 6 17:51:51.374: INFO: Waiting for pod var-expansion-3f06590c-8fc2-11ea-a618-0242ac110019 to disappear
May 6 17:51:51.415: INFO: Pod var-expansion-3f06590c-8fc2-11ea-a618-0242ac110019 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:51:51.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-mhlgh" for this suite.
May 6 17:51:57.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:51:57.540: INFO: namespace: e2e-tests-var-expansion-mhlgh, resource: bindings, ignored listing per whitelist
May 6 17:51:57.548: INFO: namespace e2e-tests-var-expansion-mhlgh deletion completed in 6.129732205s
• [SLOW TEST:14.599 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:51:57.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-47f4bb80-8fc2-11ea-a618-0242ac110019
May 6 17:51:58.953: INFO: Pod name my-hostname-basic-47f4bb80-8fc2-11ea-a618-0242ac110019: Found 0 pods out of 1
May 6 17:52:04.081: INFO: Pod name my-hostname-basic-47f4bb80-8fc2-11ea-a618-0242ac110019: Found 1 pods out of 1
May 6 17:52:04.081: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-47f4bb80-8fc2-11ea-a618-0242ac110019" are running
May 6 17:52:04.084: INFO: Pod "my-hostname-basic-47f4bb80-8fc2-11ea-a618-0242ac110019-9l6cx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 17:51:59 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 17:52:03 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 17:52:03 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 17:51:58 +0000 UTC Reason: Message:}])
May 6 17:52:04.084: INFO: Trying to dial the pod
May 6 17:52:09.097: INFO: Controller my-hostname-basic-47f4bb80-8fc2-11ea-a618-0242ac110019: Got expected result from replica 1 [my-hostname-basic-47f4bb80-8fc2-11ea-a618-0242ac110019-9l6cx]: "my-hostname-basic-47f4bb80-8fc2-11ea-a618-0242ac110019-9l6cx", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:52:09.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-pj9x9" for this suite.
May 6 17:52:17.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:52:17.131: INFO: namespace: e2e-tests-replication-controller-pj9x9, resource: bindings, ignored listing per whitelist
May 6 17:52:17.267: INFO: namespace e2e-tests-replication-controller-pj9x9 deletion completed in 8.166181604s
• [SLOW TEST:19.719 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:52:17.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-cd68f
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 6 17:52:17.663: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 6 17:52:46.183: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.27:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-cd68f PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 6 17:52:46.183: INFO: >>> kubeConfig: /root/.kube/config
I0506 17:52:46.212772 6 log.go:172] (0xc0007c5ad0) (0xc00236a640) Create stream
I0506 17:52:46.212804 6 log.go:172] (0xc0007c5ad0) (0xc00236a640) Stream added, broadcasting: 1
I0506 17:52:46.215190 6 log.go:172] (0xc0007c5ad0) Reply frame received for 1
I0506 17:52:46.215218 6 log.go:172] (0xc0007c5ad0) (0xc001dd40a0) Create stream
I0506 17:52:46.215226 6 log.go:172] (0xc0007c5ad0) (0xc001dd40a0) Stream added, broadcasting: 3
I0506 17:52:46.216099 6 log.go:172] (0xc0007c5ad0) Reply frame received for 3
I0506 17:52:46.216159 6 log.go:172] (0xc0007c5ad0) (0xc001dd4140) Create stream
I0506 17:52:46.216178 6 log.go:172] (0xc0007c5ad0) (0xc001dd4140) Stream added, broadcasting: 5
I0506 17:52:46.216965 6 log.go:172] (0xc0007c5ad0) Reply frame received for 5
I0506 17:52:46.289324 6 log.go:172] (0xc0007c5ad0) Data frame received for 5
I0506 17:52:46.289379 6 log.go:172] (0xc001dd4140) (5) Data frame handling
I0506 17:52:46.289411 6 log.go:172] (0xc0007c5ad0) Data frame received for 3
I0506 17:52:46.289430 6 log.go:172] (0xc001dd40a0) (3) Data frame handling
I0506 17:52:46.289454 6 log.go:172] (0xc001dd40a0) (3) Data frame sent
I0506 17:52:46.289466 6 log.go:172] (0xc0007c5ad0) Data frame received for 3
I0506 17:52:46.289476 6 log.go:172] (0xc001dd40a0) (3) Data frame handling
I0506 17:52:46.290911 6 log.go:172] (0xc0007c5ad0) Data frame received for 1
I0506 17:52:46.290943 6 log.go:172] (0xc00236a640) (1) Data frame handling
I0506 17:52:46.290972 6 log.go:172] (0xc00236a640) (1) Data frame sent
I0506 17:52:46.291006 6 log.go:172] (0xc0007c5ad0) (0xc00236a640) Stream removed, broadcasting: 1
I0506 17:52:46.291092 6 log.go:172] (0xc0007c5ad0) Go away received
I0506 17:52:46.291173 6 log.go:172] (0xc0007c5ad0) (0xc00236a640) Stream removed, broadcasting: 1
I0506 17:52:46.291234 6 log.go:172] (0xc0007c5ad0) (0xc001dd40a0) Stream removed, broadcasting: 3
I0506 17:52:46.291257 6 log.go:172] (0xc0007c5ad0) (0xc001dd4140) Stream removed, broadcasting: 5
May 6 17:52:46.291: INFO: Found all expected endpoints: [netserver-0]
May 6 17:52:46.295: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.67:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-cd68f PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 6 17:52:46.295: INFO: >>> kubeConfig: /root/.kube/config
I0506 17:52:46.330036 6 log.go:172] (0xc0002fb8c0) (0xc001198780) Create stream
I0506 17:52:46.330067 6 log.go:172] (0xc0002fb8c0) (0xc001198780) Stream added, broadcasting: 1
I0506 17:52:46.332960 6 log.go:172] (0xc0002fb8c0) Reply frame received for 1
I0506 17:52:46.333013 6 log.go:172] (0xc0002fb8c0) (0xc00182c000) Create stream
I0506 17:52:46.333031 6 log.go:172] (0xc0002fb8c0) (0xc00182c000) Stream added, broadcasting: 3
I0506 17:52:46.334177 6 log.go:172] (0xc0002fb8c0) Reply frame received for 3
I0506 17:52:46.334209 6 log.go:172] (0xc0002fb8c0) (0xc001f223c0) Create stream
I0506 17:52:46.334219 6 log.go:172] (0xc0002fb8c0) (0xc001f223c0) Stream added, broadcasting: 5
I0506 17:52:46.335223 6 log.go:172] (0xc0002fb8c0) Reply frame received for 5
I0506 17:52:46.402952 6 log.go:172] (0xc0002fb8c0) Data frame received for 3
I0506 17:52:46.402983 6 log.go:172] (0xc00182c000) (3) Data frame handling
I0506 17:52:46.403002 6 log.go:172] (0xc00182c000) (3) Data frame sent
I0506 17:52:46.403047 6 log.go:172] (0xc0002fb8c0) Data frame received for 3
I0506 17:52:46.403070 6 log.go:172] (0xc00182c000) (3) Data frame handling
I0506 17:52:46.403289 6 log.go:172] (0xc0002fb8c0) Data frame received for 5
I0506 17:52:46.403317 6 log.go:172] (0xc001f223c0) (5) Data frame handling
I0506 17:52:46.405737 6 log.go:172] (0xc0002fb8c0) Data frame received for 1
I0506 17:52:46.405769 6 log.go:172] (0xc001198780) (1) Data frame handling
I0506 17:52:46.405796 6 log.go:172] (0xc001198780) (1) Data frame sent
I0506 17:52:46.405818 6 log.go:172] (0xc0002fb8c0) (0xc001198780) Stream removed, broadcasting: 1
I0506 17:52:46.405840 6 log.go:172] (0xc0002fb8c0) Go away received
I0506 17:52:46.405920 6 log.go:172] (0xc0002fb8c0) (0xc001198780) Stream removed, broadcasting: 1
I0506 17:52:46.405935 6 log.go:172] (0xc0002fb8c0) (0xc00182c000) Stream removed, broadcasting: 3
I0506 17:52:46.405942 6 log.go:172] (0xc0002fb8c0) (0xc001f223c0) Stream removed, broadcasting: 5
May 6 17:52:46.405: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:52:46.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-cd68f" for this suite.
May 6 17:53:16.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:53:17.164: INFO: namespace: e2e-tests-pod-network-test-cd68f, resource: bindings, ignored listing per whitelist
May 6 17:53:17.194: INFO: namespace e2e-tests-pod-network-test-cd68f deletion completed in 30.784119133s
• [SLOW TEST:59.927 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:53:17.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
May 6 17:53:17.380: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-jq5tv" to be "success or failure"
May 6 17:53:17.483: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 102.654315ms
May 6 17:53:19.849: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.468605191s
May 6 17:53:22.070: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.689946207s
May 6 17:53:24.075: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.694387836s
May 6 17:53:26.227: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.84690077s
May 6 17:53:28.235: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.854187286s
STEP: Saw pod success
May 6 17:53:28.235: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
May 6 17:53:28.244: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
May 6 17:53:28.503: INFO: Waiting for pod pod-host-path-test to disappear
May 6 17:53:28.705: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:53:28.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-jq5tv" for this suite.
May 6 17:53:34.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:53:34.847: INFO: namespace: e2e-tests-hostpath-jq5tv, resource: bindings, ignored listing per whitelist
May 6 17:53:34.856: INFO: namespace e2e-tests-hostpath-jq5tv deletion completed in 6.146993691s
• [SLOW TEST:17.662 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:53:34.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-81c4a40d-8fc2-11ea-a618-0242ac110019
STEP: Creating a pod to test consume configMaps
May 6 17:53:35.278: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-81d06766-8fc2-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-48ff4" to be "success or failure"
May 6 17:53:35.291: INFO: Pod "pod-projected-configmaps-81d06766-8fc2-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 12.746396ms
May 6 17:53:37.294: INFO: Pod "pod-projected-configmaps-81d06766-8fc2-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016555773s
May 6 17:53:39.299: INFO: Pod "pod-projected-configmaps-81d06766-8fc2-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 4.020833969s
May 6 17:53:41.303: INFO: Pod "pod-projected-configmaps-81d06766-8fc2-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025183679s
STEP: Saw pod success
May 6 17:53:41.303: INFO: Pod "pod-projected-configmaps-81d06766-8fc2-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:53:41.306: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-81d06766-8fc2-11ea-a618-0242ac110019 container projected-configmap-volume-test:
STEP: delete the pod
May 6 17:53:41.322: INFO: Waiting for pod pod-projected-configmaps-81d06766-8fc2-11ea-a618-0242ac110019 to disappear
May 6 17:53:41.340: INFO: Pod pod-projected-configmaps-81d06766-8fc2-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:53:41.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-48ff4" for this suite.
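The test above mounts one ConfigMap into the same pod through more than one volume. A minimal sketch of that shape — volume names, mount paths, and image are illustrative assumptions, not values from the log:

```python
# Hypothetical sketch: two projected volumes in one pod, both sourcing the
# same ConfigMap. All names here are illustrative.
configmap_name = "projected-configmap-test-volume"

def projected_volume(vol_name, cm_name):
    """Build a projected volume whose only source is one ConfigMap."""
    return {
        "name": vol_name,
        "projected": {"sources": [{"configMap": {"name": cm_name}}]},
    }

pod_spec = {
    "volumes": [
        projected_volume("projected-configmap-volume-1", configmap_name),
        projected_volume("projected-configmap-volume-2", configmap_name),
    ],
    "containers": [
        {
            "name": "projected-configmap-volume-test",
            "image": "busybox",
            "volumeMounts": [
                {"name": "projected-configmap-volume-1", "mountPath": "/etc/projected-1"},
                {"name": "projected-configmap-volume-2", "mountPath": "/etc/projected-2"},
            ],
        }
    ],
}

# Both volumes project the same ConfigMap, so the container sees identical
# data at two different mount paths.
sources = {
    v["projected"]["sources"][0]["configMap"]["name"] for v in pod_spec["volumes"]
}
assert sources == {configmap_name}
```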
May 6 17:53:47.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:53:47.509: INFO: namespace: e2e-tests-projected-48ff4, resource: bindings, ignored listing per whitelist
May 6 17:53:47.533: INFO: namespace e2e-tests-projected-48ff4 deletion completed in 6.189758601s
• [SLOW TEST:12.677 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:53:47.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 6 17:53:47.629: INFO: Creating ReplicaSet my-hostname-basic-89306374-8fc2-11ea-a618-0242ac110019
May 6 17:53:47.645: INFO: Pod name my-hostname-basic-89306374-8fc2-11ea-a618-0242ac110019: Found 0 pods out of 1
May 6 17:53:52.649: INFO: Pod name my-hostname-basic-89306374-8fc2-11ea-a618-0242ac110019: Found 1 pods out of 1
May 6 17:53:52.649: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-89306374-8fc2-11ea-a618-0242ac110019" is running
May 6 17:53:52.651: INFO: Pod "my-hostname-basic-89306374-8fc2-11ea-a618-0242ac110019-45g5c" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 17:53:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 17:53:52 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 17:53:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 17:53:47 +0000 UTC Reason: Message:}])
May 6 17:53:52.651: INFO: Trying to dial the pod
May 6 17:53:57.663: INFO: Controller my-hostname-basic-89306374-8fc2-11ea-a618-0242ac110019: Got expected result from replica 1 [my-hostname-basic-89306374-8fc2-11ea-a618-0242ac110019-45g5c]: "my-hostname-basic-89306374-8fc2-11ea-a618-0242ac110019-45g5c", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:53:57.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-jcwf9" for this suite.
May 6 17:54:03.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:54:03.949: INFO: namespace: e2e-tests-replicaset-jcwf9, resource: bindings, ignored listing per whitelist
May 6 17:54:04.003: INFO: namespace e2e-tests-replicaset-jcwf9 deletion completed in 6.33674972s
• [SLOW TEST:16.470 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:54:04.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-pkxrz
May 6 17:54:12.397: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-pkxrz
STEP: checking the pod's current state and verifying that restartCount is present
May 6 17:54:12.399: INFO: Initial restart count of pod liveness-http is 0
May 6 17:54:29.407: INFO: Restart count of pod e2e-tests-container-probe-pkxrz/liveness-http is now 1 (17.007487356s elapsed)
May 6 17:54:51.839: INFO: Restart count of pod e2e-tests-container-probe-pkxrz/liveness-http is now 2 (39.439299776s elapsed)
May 6 17:55:10.165: INFO: Restart count of pod e2e-tests-container-probe-pkxrz/liveness-http is now 3 (57.765286367s elapsed)
May 6 17:55:32.672: INFO: Restart count of pod e2e-tests-container-probe-pkxrz/liveness-http is now 4 (1m20.27290118s elapsed)
May 6 17:56:30.107: INFO: Restart count of pod e2e-tests-container-probe-pkxrz/liveness-http is now 5 (2m17.707949421s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:56:30.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-pkxrz" for this suite.
May 6 17:56:36.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:56:36.348: INFO: namespace: e2e-tests-container-probe-pkxrz, resource: bindings, ignored listing per whitelist
May 6 17:56:36.372: INFO: namespace e2e-tests-container-probe-pkxrz deletion completed in 6.112525032s
• [SLOW TEST:152.369 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:56:36.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-edd50563-8fc2-11ea-a618-0242ac110019
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:56:44.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rpmxq" for this suite.
May 6 17:57:08.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:57:08.600: INFO: namespace: e2e-tests-configmap-rpmxq, resource: bindings, ignored listing per whitelist
May 6 17:57:08.641: INFO: namespace e2e-tests-configmap-rpmxq deletion completed in 24.114210838s
• [SLOW TEST:32.269 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:57:08.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-013318c4-8fc3-11ea-a618-0242ac110019
STEP: Creating a pod to test consume configMaps
May 6 17:57:08.985: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0133a3b1-8fc3-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-xzf5z" to be "success or failure"
May 6 17:57:09.015: INFO: Pod "pod-projected-configmaps-0133a3b1-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 30.225349ms
May 6 17:57:11.164: INFO: Pod "pod-projected-configmaps-0133a3b1-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179209465s
May 6 17:57:13.168: INFO: Pod "pod-projected-configmaps-0133a3b1-8fc3-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 4.18332191s
May 6 17:57:15.173: INFO: Pod "pod-projected-configmaps-0133a3b1-8fc3-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.187917811s
STEP: Saw pod success
May 6 17:57:15.173: INFO: Pod "pod-projected-configmaps-0133a3b1-8fc3-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:57:15.176: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-0133a3b1-8fc3-11ea-a618-0242ac110019 container projected-configmap-volume-test:
STEP: delete the pod
May 6 17:57:15.284: INFO: Waiting for pod pod-projected-configmaps-0133a3b1-8fc3-11ea-a618-0242ac110019 to disappear
May 6 17:57:15.307: INFO: Pod pod-projected-configmaps-0133a3b1-8fc3-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:57:15.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xzf5z" for this suite.
May 6 17:57:27.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:57:27.434: INFO: namespace: e2e-tests-projected-xzf5z, resource: bindings, ignored listing per whitelist
May 6 17:57:27.467: INFO: namespace e2e-tests-projected-xzf5z deletion completed in 12.1573773s
• [SLOW TEST:18.826 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:57:27.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-gf66t/configmap-test-0c904d17-8fc3-11ea-a618-0242ac110019
STEP: Creating a pod to test consume configMaps
May 6 17:57:28.084: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c9413f8-8fc3-11ea-a618-0242ac110019" in namespace "e2e-tests-configmap-gf66t" to be "success or failure"
May 6 17:57:28.134: INFO: Pod "pod-configmaps-0c9413f8-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 49.856763ms
May 6 17:57:30.194: INFO: Pod "pod-configmaps-0c9413f8-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110066216s
May 6 17:57:32.403: INFO: Pod "pod-configmaps-0c9413f8-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31927594s
May 6 17:57:34.406: INFO: Pod "pod-configmaps-0c9413f8-8fc3-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.321896249s
STEP: Saw pod success
May 6 17:57:34.406: INFO: Pod "pod-configmaps-0c9413f8-8fc3-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:57:34.407: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-0c9413f8-8fc3-11ea-a618-0242ac110019 container env-test:
STEP: delete the pod
May 6 17:57:34.596: INFO: Waiting for pod pod-configmaps-0c9413f8-8fc3-11ea-a618-0242ac110019 to disappear
May 6 17:57:34.624: INFO: Pod pod-configmaps-0c9413f8-8fc3-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:57:34.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gf66t" for this suite.
May 6 17:57:40.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:57:40.668: INFO: namespace: e2e-tests-configmap-gf66t, resource: bindings, ignored listing per whitelist
May 6 17:57:40.743: INFO: namespace e2e-tests-configmap-gf66t deletion completed in 6.116315637s
• [SLOW TEST:13.276 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:57:40.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 6 17:57:40.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-143604bd-8fc3-11ea-a618-0242ac110019" in namespace "e2e-tests-downward-api-sp2qp" to be "success or failure"
May 6 17:57:40.900: INFO: Pod "downwardapi-volume-143604bd-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 16.556204ms
May 6 17:57:43.044: INFO: Pod "downwardapi-volume-143604bd-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161056033s
May 6 17:57:45.047: INFO: Pod "downwardapi-volume-143604bd-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164049335s
May 6 17:57:47.050: INFO: Pod "downwardapi-volume-143604bd-8fc3-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.167093784s
STEP: Saw pod success
May 6 17:57:47.051: INFO: Pod "downwardapi-volume-143604bd-8fc3-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:57:47.052: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-143604bd-8fc3-11ea-a618-0242ac110019 container client-container:
STEP: delete the pod
May 6 17:57:47.102: INFO: Waiting for pod downwardapi-volume-143604bd-8fc3-11ea-a618-0242ac110019 to disappear
May 6 17:57:47.350: INFO: Pod downwardapi-volume-143604bd-8fc3-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:57:47.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sp2qp" for this suite.
May 6 17:57:53.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:57:53.420: INFO: namespace: e2e-tests-downward-api-sp2qp, resource: bindings, ignored listing per whitelist
May 6 17:57:53.438: INFO: namespace e2e-tests-downward-api-sp2qp deletion completed in 6.085004096s
• [SLOW TEST:12.695 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:57:53.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:58:53.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-596dz" for this suite.
May 6 17:59:16.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:59:16.281: INFO: namespace: e2e-tests-container-probe-596dz, resource: bindings, ignored listing per whitelist
May 6 17:59:16.322: INFO: namespace e2e-tests-container-probe-596dz deletion completed in 22.553908981s
• [SLOW TEST:82.883 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:59:16.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 6 17:59:16.663: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:59:27.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-2r4xc" for this suite.
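The InitContainer test above exercises a pod whose init container fails while `restartPolicy` is `Never`. A minimal sketch of that pod shape — container names, image, and commands are illustrative assumptions, not values from the log:

```python
# Hypothetical sketch of a RestartNever pod with a failing init container.
# Names, image, and commands are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-init-test"},
    "spec": {
        # With restartPolicy Never, a failed init container is not retried,
        # so the pod goes straight to Phase=Failed.
        "restartPolicy": "Never",
        "initContainers": [
            {"name": "init1", "image": "busybox", "command": ["/bin/false"]}
        ],
        # App containers only start after every init container succeeds,
        # so this container should never run -- which is the assertion the
        # test name describes.
        "containers": [
            {"name": "run1", "image": "busybox", "command": ["/bin/true"]}
        ],
    },
}

assert pod["spec"]["restartPolicy"] == "Never"
assert pod["spec"]["initContainers"][0]["command"] == ["/bin/false"]
```

With `restartPolicy: OnFailure` the kubelet would instead retry the init container indefinitely, which is why the restart policy is the pivot of this test.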
May 6 17:59:36.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:59:36.089: INFO: namespace: e2e-tests-init-container-2r4xc, resource: bindings, ignored listing per whitelist
May 6 17:59:36.102: INFO: namespace e2e-tests-init-container-2r4xc deletion completed in 8.270747242s
• [SLOW TEST:19.780 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:59:36.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-5908cbc2-8fc3-11ea-a618-0242ac110019
STEP: Creating a pod to test consume secrets
May 6 17:59:36.435: INFO: Waiting up to 5m0s for pod "pod-secrets-590de19c-8fc3-11ea-a618-0242ac110019" in namespace "e2e-tests-secrets-pj8gt" to be "success or failure"
May 6 17:59:36.591: INFO: Pod "pod-secrets-590de19c-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 155.707502ms
May 6 17:59:38.595: INFO: Pod "pod-secrets-590de19c-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15948871s
May 6 17:59:40.975: INFO: Pod "pod-secrets-590de19c-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.539549385s
May 6 17:59:42.979: INFO: Pod "pod-secrets-590de19c-8fc3-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.543384439s
STEP: Saw pod success
May 6 17:59:42.979: INFO: Pod "pod-secrets-590de19c-8fc3-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:59:42.982: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-590de19c-8fc3-11ea-a618-0242ac110019 container secret-volume-test:
STEP: delete the pod
May 6 17:59:42.999: INFO: Waiting for pod pod-secrets-590de19c-8fc3-11ea-a618-0242ac110019 to disappear
May 6 17:59:43.003: INFO: Pod pod-secrets-590de19c-8fc3-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:59:43.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-pj8gt" for this suite.
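"With mappings" in the Secrets test above refers to the `items` list of a secret volume, which remaps a secret key to a custom file path inside the mount. A minimal sketch of that shape — the secret name, key, and paths are illustrative assumptions, not values from the log:

```python
# Hypothetical sketch of a secret volume with a key-to-path mapping.
# Secret name, key, and paths are illustrative.
volume = {
    "name": "secret-volume",
    "secret": {
        "secretName": "secret-test-map",
        # Without items, every key appears as a file named after the key.
        # With items, only the listed keys appear, at the given paths.
        "items": [{"key": "data-1", "path": "new-path-data-1"}],
    },
}
container = {
    "name": "secret-volume-test",
    "image": "busybox",
    # Reads the file at the remapped path and exits, matching the
    # "success or failure" wait in the log.
    "command": ["cat", "/etc/secret-volume/new-path-data-1"],
    "volumeMounts": [{"name": "secret-volume", "mountPath": "/etc/secret-volume"}],
}

# The file the container reads must be one of the mapped paths.
mapped_paths = {item["path"] for item in volume["secret"]["items"]}
assert container["command"][1].rsplit("/", 1)[1] in mapped_paths
```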
May 6 17:59:49.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 17:59:49.073: INFO: namespace: e2e-tests-secrets-pj8gt, resource: bindings, ignored listing per whitelist
May 6 17:59:49.099: INFO: namespace e2e-tests-secrets-pj8gt deletion completed in 6.092822232s
• [SLOW TEST:12.997 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 17:59:49.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 6 17:59:49.209: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60b249d4-8fc3-11ea-a618-0242ac110019" in namespace "e2e-tests-downward-api-t87r5" to be "success or failure"
May 6 17:59:49.226: INFO: Pod "downwardapi-volume-60b249d4-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 16.19795ms
May 6 17:59:51.229: INFO: Pod "downwardapi-volume-60b249d4-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019589106s
May 6 17:59:53.267: INFO: Pod "downwardapi-volume-60b249d4-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057821393s
May 6 17:59:55.271: INFO: Pod "downwardapi-volume-60b249d4-8fc3-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061786279s
STEP: Saw pod success
May 6 17:59:55.271: INFO: Pod "downwardapi-volume-60b249d4-8fc3-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 17:59:55.274: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-60b249d4-8fc3-11ea-a618-0242ac110019 container client-container:
STEP: delete the pod
May 6 17:59:55.713: INFO: Waiting for pod downwardapi-volume-60b249d4-8fc3-11ea-a618-0242ac110019 to disappear
May 6 17:59:55.727: INFO: Pod downwardapi-volume-60b249d4-8fc3-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 17:59:55.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t87r5" for this suite.
May 6 18:00:01.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:00:01.808: INFO: namespace: e2e-tests-downward-api-t87r5, resource: bindings, ignored listing per whitelist
May 6 18:00:01.860: INFO: namespace e2e-tests-downward-api-t87r5 deletion completed in 6.130384486s
• [SLOW TEST:12.761 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:00:01.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 6 18:00:01.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-lx7r5'
May 6 18:00:10.780: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 6 18:00:10.780: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
May 6 18:00:10.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-lx7r5'
May 6 18:00:11.064: INFO: stderr: ""
May 6 18:00:11.064: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:00:11.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lx7r5" for this suite.
May 6 18:00:17.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:00:17.256: INFO: namespace: e2e-tests-kubectl-lx7r5, resource: bindings, ignored listing per whitelist
May 6 18:00:17.262: INFO: namespace e2e-tests-kubectl-lx7r5 deletion completed in 6.194872662s
• [SLOW TEST:15.402 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:00:17.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 6 18:00:25.907: INFO: Successfully updated pod "pod-update-activedeadlineseconds-717db907-8fc3-11ea-a618-0242ac110019"
May 6 18:00:25.907: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-717db907-8fc3-11ea-a618-0242ac110019" in namespace "e2e-tests-pods-wjsg5" to be "terminated due to deadline exceeded"
May 6 18:00:26.119: INFO: Pod "pod-update-activedeadlineseconds-717db907-8fc3-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 211.620082ms
May 6 18:00:28.425: INFO: Pod "pod-update-activedeadlineseconds-717db907-8fc3-11ea-a618-0242ac110019": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.517563936s
May 6 18:00:28.425: INFO: Pod "pod-update-activedeadlineseconds-717db907-8fc3-11ea-a618-0242ac110019" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:00:28.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-wjsg5" for this suite.
May 6 18:00:34.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:00:34.629: INFO: namespace: e2e-tests-pods-wjsg5, resource: bindings, ignored listing per whitelist
May 6 18:00:34.643: INFO: namespace e2e-tests-pods-wjsg5 deletion completed in 6.213846927s
• [SLOW TEST:17.380 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:00:34.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
May 6 18:00:47.729: INFO: 5 pods remaining
May 6 18:00:47.729: INFO: 5 pods has nil DeletionTimestamp
May 6 18:00:47.729: INFO:
STEP: Gathering metrics
W0506 18:00:52.312213 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 6 18:00:52.312: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:00:52.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-cbls9" for this suite.
May 6 18:01:06.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:01:06.356: INFO: namespace: e2e-tests-gc-cbls9, resource: bindings, ignored listing per whitelist
May 6 18:01:06.410: INFO: namespace e2e-tests-gc-cbls9 deletion completed in 14.094410824s
• [SLOW TEST:31.767 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:01:06.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-b4ns
STEP: Creating a pod to test atomic-volume-subpath
May 6 18:01:06.558: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-b4ns" in namespace "e2e-tests-subpath-lm7ct" to be "success or failure"
May 6 18:01:06.568: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Pending", Reason="", readiness=false. Elapsed: 9.754428ms
May 6 18:01:08.797: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23927028s
May 6 18:01:10.800: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242686987s
May 6 18:01:12.808: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249868244s
May 6 18:01:14.813: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Pending", Reason="", readiness=false. Elapsed: 8.255492792s
May 6 18:01:16.910: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Running", Reason="", readiness=false. Elapsed: 10.352318437s
May 6 18:01:18.915: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Running", Reason="", readiness=false. Elapsed: 12.356889042s
May 6 18:01:20.919: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Running", Reason="", readiness=false. Elapsed: 14.361267599s
May 6 18:01:23.276: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Running", Reason="", readiness=false. Elapsed: 16.71787387s
May 6 18:01:25.280: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Running", Reason="", readiness=false. Elapsed: 18.721964942s
May 6 18:01:27.285: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Running", Reason="", readiness=false. Elapsed: 20.726711009s
May 6 18:01:29.289: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Running", Reason="", readiness=false. Elapsed: 22.731017737s
May 6 18:01:31.348: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Running", Reason="", readiness=false. Elapsed: 24.789858147s
May 6 18:01:33.352: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Running", Reason="", readiness=false. Elapsed: 26.794362595s
May 6 18:01:35.356: INFO: Pod "pod-subpath-test-downwardapi-b4ns": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.798166941s
STEP: Saw pod success
May 6 18:01:35.356: INFO: Pod "pod-subpath-test-downwardapi-b4ns" satisfied condition "success or failure"
May 6 18:01:35.359: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-b4ns container test-container-subpath-downwardapi-b4ns:
STEP: delete the pod
May 6 18:01:35.523: INFO: Waiting for pod pod-subpath-test-downwardapi-b4ns to disappear
May 6 18:01:35.563: INFO: Pod pod-subpath-test-downwardapi-b4ns no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-b4ns
May 6 18:01:35.563: INFO: Deleting pod "pod-subpath-test-downwardapi-b4ns" in namespace "e2e-tests-subpath-lm7ct"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:01:35.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-lm7ct" for this suite.
May 6 18:01:49.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:01:49.782: INFO: namespace: e2e-tests-subpath-lm7ct, resource: bindings, ignored listing per whitelist
May 6 18:01:49.787: INFO: namespace e2e-tests-subpath-lm7ct deletion completed in 14.218814782s
• [SLOW TEST:43.376 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:01:49.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-a8e6eb19-8fc3-11ea-a618-0242ac110019
STEP: Creating secret with name secret-projected-all-test-volume-a8e6eae7-8fc3-11ea-a618-0242ac110019
STEP: Creating a pod to test Check all projections for projected volume plugin
May 6 18:01:50.386: INFO: Waiting up to 5m0s for pod "projected-volume-a8e6ea7e-8fc3-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-phmp8" to be "success or failure"
May 6 18:01:50.419: INFO: Pod "projected-volume-a8e6ea7e-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 32.945664ms
May 6 18:01:52.924: INFO: Pod "projected-volume-a8e6ea7e-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.53803152s
May 6 18:01:55.247: INFO: Pod "projected-volume-a8e6ea7e-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.861575746s
May 6 18:01:57.252: INFO: Pod "projected-volume-a8e6ea7e-8fc3-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 6.866200005s
May 6 18:01:59.416: INFO: Pod "projected-volume-a8e6ea7e-8fc3-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.029811002s
STEP: Saw pod success
May 6 18:01:59.416: INFO: Pod "projected-volume-a8e6ea7e-8fc3-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:01:59.420: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-a8e6ea7e-8fc3-11ea-a618-0242ac110019 container projected-all-volume-test:
STEP: delete the pod
May 6 18:02:00.286: INFO: Waiting for pod projected-volume-a8e6ea7e-8fc3-11ea-a618-0242ac110019 to disappear
May 6 18:02:00.786: INFO: Pod projected-volume-a8e6ea7e-8fc3-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:02:00.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-phmp8" for this suite.
May 6 18:02:09.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:02:09.535: INFO: namespace: e2e-tests-projected-phmp8, resource: bindings, ignored listing per whitelist
May 6 18:02:09.552: INFO: namespace e2e-tests-projected-phmp8 deletion completed in 8.76195035s
• [SLOW TEST:19.765 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:02:09.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 6 18:02:10.986: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-xs5jt,SelfLink:/api/v1/namespaces/e2e-tests-watch-xs5jt/configmaps/e2e-watch-test-resource-version,UID:b4ca6daa-8fc3-11ea-99e8-0242ac110002,ResourceVersion:9089962,Generation:0,CreationTimestamp:2020-05-06 18:02:10 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 6 18:02:10.986: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-xs5jt,SelfLink:/api/v1/namespaces/e2e-tests-watch-xs5jt/configmaps/e2e-watch-test-resource-version,UID:b4ca6daa-8fc3-11ea-99e8-0242ac110002,ResourceVersion:9089963,Generation:0,CreationTimestamp:2020-05-06 18:02:10 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:02:10.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-xs5jt" for this suite.
May 6 18:02:19.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:02:19.698: INFO: namespace: e2e-tests-watch-xs5jt, resource: bindings, ignored listing per whitelist
May 6 18:02:19.730: INFO: namespace e2e-tests-watch-xs5jt deletion completed in 8.655873291s
• [SLOW TEST:10.178 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:02:19.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
May 6 18:02:20.490: INFO: Waiting up to 5m0s for pod "client-containers-ba928f3e-8fc3-11ea-a618-0242ac110019" in namespace "e2e-tests-containers-f4jm8" to be "success or failure"
May 6 18:02:20.661: INFO: Pod "client-containers-ba928f3e-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 171.406726ms
May 6 18:02:22.695: INFO: Pod "client-containers-ba928f3e-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205578098s
May 6 18:02:24.700: INFO: Pod "client-containers-ba928f3e-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209979184s
May 6 18:02:26.703: INFO: Pod "client-containers-ba928f3e-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213702821s
May 6 18:02:28.708: INFO: Pod "client-containers-ba928f3e-8fc3-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.217850053s
STEP: Saw pod success
May 6 18:02:28.708: INFO: Pod "client-containers-ba928f3e-8fc3-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:02:28.710: INFO: Trying to get logs from node hunter-worker pod client-containers-ba928f3e-8fc3-11ea-a618-0242ac110019 container test-container:
STEP: delete the pod
May 6 18:02:28.769: INFO: Waiting for pod client-containers-ba928f3e-8fc3-11ea-a618-0242ac110019 to disappear
May 6 18:02:28.875: INFO: Pod client-containers-ba928f3e-8fc3-11ea-a618-0242ac110019 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:02:28.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-f4jm8" for this suite.
May 6 18:02:36.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:02:36.954: INFO: namespace: e2e-tests-containers-f4jm8, resource: bindings, ignored listing per whitelist
May 6 18:02:36.972: INFO: namespace e2e-tests-containers-f4jm8 deletion completed in 8.093006684s
• [SLOW TEST:17.241 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:02:36.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-c58c5970-8fc3-11ea-a618-0242ac110019
STEP: Creating a pod to test consume secrets
May 6 18:02:39.384: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c58eee15-8fc3-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-2rcjx" to be "success or failure"
May 6 18:02:39.387: INFO: Pod "pod-projected-secrets-c58eee15-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.41046ms
May 6 18:02:41.409: INFO: Pod "pod-projected-secrets-c58eee15-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024690809s
May 6 18:02:43.413: INFO: Pod "pod-projected-secrets-c58eee15-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028085467s
May 6 18:02:45.763: INFO: Pod "pod-projected-secrets-c58eee15-8fc3-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 6.378138543s
May 6 18:02:47.766: INFO: Pod "pod-projected-secrets-c58eee15-8fc3-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 8.381331233s
May 6 18:02:49.770: INFO: Pod "pod-projected-secrets-c58eee15-8fc3-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.385823545s
STEP: Saw pod success
May 6 18:02:49.770: INFO: Pod "pod-projected-secrets-c58eee15-8fc3-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:02:49.773: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-c58eee15-8fc3-11ea-a618-0242ac110019 container projected-secret-volume-test:
STEP: delete the pod
May 6 18:02:49.907: INFO: Waiting for pod pod-projected-secrets-c58eee15-8fc3-11ea-a618-0242ac110019 to disappear
May 6 18:02:49.943: INFO: Pod pod-projected-secrets-c58eee15-8fc3-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:02:49.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2rcjx" for this suite.
May 6 18:02:58.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:02:58.042: INFO: namespace: e2e-tests-projected-2rcjx, resource: bindings, ignored listing per whitelist
May 6 18:02:58.087: INFO: namespace e2e-tests-projected-2rcjx deletion completed in 8.139807197s
• [SLOW TEST:21.115 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:02:58.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-sqwg
STEP: Creating a pod to test atomic-volume-subpath
May 6 18:02:58.739: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-sqwg" in namespace "e2e-tests-subpath-hdphq" to be "success or failure"
May 6 18:02:58.762: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Pending", Reason="", readiness=false. Elapsed: 22.996901ms
May 6 18:03:01.205: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.466300982s
May 6 18:03:03.223: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.484012084s
May 6 18:03:05.690: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.950655982s
May 6 18:03:07.695: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.955578374s
May 6 18:03:09.877: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Pending", Reason="", readiness=false. Elapsed: 11.137424788s
May 6 18:03:11.880: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Running", Reason="", readiness=false. Elapsed: 13.14125824s
May 6 18:03:13.919: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Running", Reason="", readiness=false. Elapsed: 15.179821924s
May 6 18:03:15.923: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Running", Reason="", readiness=false. Elapsed: 17.183723749s
May 6 18:03:17.927: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Running", Reason="", readiness=false. Elapsed: 19.188239552s
May 6 18:03:19.932: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Running", Reason="", readiness=false. Elapsed: 21.192712841s
May 6 18:03:21.936: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Running", Reason="", readiness=false. Elapsed: 23.19713704s
May 6 18:03:23.940: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Running", Reason="", readiness=false. Elapsed: 25.201306653s
May 6 18:03:25.945: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Running", Reason="", readiness=false. Elapsed: 27.205770167s
May 6 18:03:27.949: INFO: Pod "pod-subpath-test-secret-sqwg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.209967031s
STEP: Saw pod success
May 6 18:03:27.949: INFO: Pod "pod-subpath-test-secret-sqwg" satisfied condition "success or failure"
May 6 18:03:27.952: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-sqwg container test-container-subpath-secret-sqwg:
STEP: delete the pod
May 6 18:03:28.311: INFO: Waiting for pod pod-subpath-test-secret-sqwg to disappear
May 6 18:03:28.404: INFO: Pod pod-subpath-test-secret-sqwg no longer exists
STEP: Deleting pod pod-subpath-test-secret-sqwg
May 6 18:03:28.404: INFO: Deleting pod "pod-subpath-test-secret-sqwg" in namespace "e2e-tests-subpath-hdphq"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:03:28.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-hdphq" for this suite.
May 6 18:03:36.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:03:36.824: INFO: namespace: e2e-tests-subpath-hdphq, resource: bindings, ignored listing per whitelist
May 6 18:03:36.948: INFO: namespace e2e-tests-subpath-hdphq deletion completed in 8.307723236s
• [SLOW TEST:38.861 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with secret pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:03:36.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
May 6 18:03:37.256: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

May 6 18:03:37.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bpwn2'
May 6 18:03:38.629: INFO: stderr: ""
May 6 18:03:38.629: INFO: stdout: "service/redis-slave created\n"
May 6 18:03:38.629: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

May 6 18:03:38.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bpwn2'
May 6 18:03:40.388: INFO: stderr: ""
May 6 18:03:40.388: INFO: stdout: "service/redis-master created\n"
May 6 18:03:40.388: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 6 18:03:40.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bpwn2'
May 6 18:03:41.192: INFO: stderr: ""
May 6 18:03:41.192: INFO: stdout: "service/frontend created\n"
May 6 18:03:41.192: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

May 6 18:03:41.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bpwn2'
May 6 18:03:41.758: INFO: stderr: ""
May 6 18:03:41.758: INFO: stdout: "deployment.extensions/frontend created\n"
May 6 18:03:41.758: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 6 18:03:41.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bpwn2'
May 6 18:03:42.530: INFO: stderr: ""
May 6 18:03:42.530: INFO: stdout: "deployment.extensions/redis-master created\n"
May 6 18:03:42.530: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

May 6 18:03:42.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-bpwn2'
May 6 18:03:43.055: INFO: stderr: ""
May 6 18:03:43.055: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
May 6 18:03:43.055: INFO: Waiting for all frontend pods to be Running.
May 6 18:03:58.105: INFO: Waiting for frontend to serve content.
May 6 18:03:58.159: INFO: Trying to add a new entry to the guestbook.
May 6 18:03:58.171: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 6 18:03:58.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bpwn2'
May 6 18:03:58.456: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 6 18:03:58.456: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
May 6 18:03:58.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bpwn2'
May 6 18:03:58.610: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 6 18:03:58.611: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
May 6 18:03:58.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bpwn2'
May 6 18:03:58.792: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 6 18:03:58.792: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 6 18:03:58.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bpwn2'
May 6 18:03:58.931: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 6 18:03:58.931: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 6 18:03:58.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bpwn2'
May 6 18:03:59.065: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 6 18:03:59.065: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
May 6 18:03:59.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-bpwn2'
May 6 18:04:00.021: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 6 18:04:00.021: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:04:00.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bpwn2" for this suite.
May 6 18:04:44.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:04:44.467: INFO: namespace: e2e-tests-kubectl-bpwn2, resource: bindings, ignored listing per whitelist
May 6 18:04:44.480: INFO: namespace e2e-tests-kubectl-bpwn2 deletion completed in 44.414618631s
• [SLOW TEST:67.532 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:04:44.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
May 6 18:04:44.886: INFO: Waiting up to 5m0s for pod "pod-10ec23fc-8fc4-11ea-a618-0242ac110019" in namespace "e2e-tests-emptydir-655wb" to be "success or failure"
May 6 18:04:44.917: INFO: Pod "pod-10ec23fc-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 31.560753ms
May 6 18:04:46.921: INFO: Pod "pod-10ec23fc-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035187021s
May 6 18:04:49.003: INFO: Pod "pod-10ec23fc-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116703447s
May 6 18:04:51.006: INFO: Pod "pod-10ec23fc-8fc4-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.119833292s
STEP: Saw pod success
May 6 18:04:51.006: INFO: Pod "pod-10ec23fc-8fc4-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:04:51.008: INFO: Trying to get logs from node hunter-worker2 pod pod-10ec23fc-8fc4-11ea-a618-0242ac110019 container test-container:
STEP: delete the pod
May 6 18:04:51.387: INFO: Waiting for pod pod-10ec23fc-8fc4-11ea-a618-0242ac110019 to disappear
May 6 18:04:51.390: INFO: Pod pod-10ec23fc-8fc4-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:04:51.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-655wb" for this suite.
May 6 18:04:57.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:04:57.478: INFO: namespace: e2e-tests-emptydir-655wb, resource: bindings, ignored listing per whitelist
May 6 18:04:57.516: INFO: namespace e2e-tests-emptydir-655wb deletion completed in 6.121815123s
• [SLOW TEST:13.036 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:04:57.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-18a918c9-8fc4-11ea-a618-0242ac110019
STEP: Creating a pod to test consume configMaps
May 6 18:04:57.904: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-18a9c3cb-8fc4-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-2vb66" to be "success or failure"
May 6 18:04:57.992: INFO: Pod "pod-projected-configmaps-18a9c3cb-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 87.528584ms
May 6 18:05:00.297: INFO: Pod "pod-projected-configmaps-18a9c3cb-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.392308884s
May 6 18:05:02.300: INFO: Pod "pod-projected-configmaps-18a9c3cb-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.395500522s
May 6 18:05:04.365: INFO: Pod "pod-projected-configmaps-18a9c3cb-8fc4-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.46085709s
STEP: Saw pod success
May 6 18:05:04.365: INFO: Pod "pod-projected-configmaps-18a9c3cb-8fc4-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:05:04.368: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-18a9c3cb-8fc4-11ea-a618-0242ac110019 container projected-configmap-volume-test:
STEP: delete the pod
May 6 18:05:04.483: INFO: Waiting for pod pod-projected-configmaps-18a9c3cb-8fc4-11ea-a618-0242ac110019 to disappear
May 6 18:05:04.841: INFO: Pod pod-projected-configmaps-18a9c3cb-8fc4-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:05:04.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2vb66" for this suite.
May 6 18:05:13.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:05:13.076: INFO: namespace: e2e-tests-projected-2vb66, resource: bindings, ignored listing per whitelist
May 6 18:05:13.186: INFO: namespace e2e-tests-projected-2vb66 deletion completed in 8.341096062s
• [SLOW TEST:15.670 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:05:13.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-21e5a8c9-8fc4-11ea-a618-0242ac110019
STEP: Creating a pod to test consume secrets
May 6 18:05:13.374: INFO: Waiting up to 5m0s for pod "pod-secrets-21e7cf50-8fc4-11ea-a618-0242ac110019" in namespace "e2e-tests-secrets-b947w" to be "success or failure"
May 6 18:05:13.488: INFO: Pod "pod-secrets-21e7cf50-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 114.183936ms
May 6 18:05:15.491: INFO: Pod "pod-secrets-21e7cf50-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117153578s
May 6 18:05:17.494: INFO: Pod "pod-secrets-21e7cf50-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12017978s
May 6 18:05:19.528: INFO: Pod "pod-secrets-21e7cf50-8fc4-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.153618891s
STEP: Saw pod success
May 6 18:05:19.528: INFO: Pod "pod-secrets-21e7cf50-8fc4-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:05:19.530: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-21e7cf50-8fc4-11ea-a618-0242ac110019 container secret-volume-test:
STEP: delete the pod
May 6 18:05:19.762: INFO: Waiting for pod pod-secrets-21e7cf50-8fc4-11ea-a618-0242ac110019 to disappear
May 6 18:05:19.810: INFO: Pod pod-secrets-21e7cf50-8fc4-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:05:19.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-b947w" for this suite.
May 6 18:05:26.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:05:26.046: INFO: namespace: e2e-tests-secrets-b947w, resource: bindings, ignored listing per whitelist
May 6 18:05:26.105: INFO: namespace e2e-tests-secrets-b947w deletion completed in 6.291848913s
• [SLOW TEST:12.919 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:05:26.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-29d0eb99-8fc4-11ea-a618-0242ac110019
STEP: Creating a pod to test consume secrets
May 6 18:05:26.998: INFO: Waiting up to 5m0s for pod "pod-secrets-2a0722af-8fc4-11ea-a618-0242ac110019" in namespace "e2e-tests-secrets-zzfdb" to be "success or failure"
May 6 18:05:27.013: INFO: Pod "pod-secrets-2a0722af-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 15.460536ms
May 6 18:05:29.018: INFO: Pod "pod-secrets-2a0722af-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019930547s
May 6 18:05:31.021: INFO: Pod "pod-secrets-2a0722af-8fc4-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 4.02361236s
May 6 18:05:33.025: INFO: Pod "pod-secrets-2a0722af-8fc4-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027556492s
STEP: Saw pod success
May 6 18:05:33.025: INFO: Pod "pod-secrets-2a0722af-8fc4-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:05:33.028: INFO: Trying to get logs from node hunter-worker pod pod-secrets-2a0722af-8fc4-11ea-a618-0242ac110019 container secret-volume-test:
STEP: delete the pod
May 6 18:05:33.050: INFO: Waiting for pod pod-secrets-2a0722af-8fc4-11ea-a618-0242ac110019 to disappear
May 6 18:05:33.054: INFO: Pod pod-secrets-2a0722af-8fc4-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:05:33.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zzfdb" for this suite.
May 6 18:05:39.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:05:39.135: INFO: namespace: e2e-tests-secrets-zzfdb, resource: bindings, ignored listing per whitelist
May 6 18:05:39.163: INFO: namespace e2e-tests-secrets-zzfdb deletion completed in 6.104960444s
STEP: Destroying namespace "e2e-tests-secret-namespace-4ttm9" for this suite.
May 6 18:05:47.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:05:47.202: INFO: namespace: e2e-tests-secret-namespace-4ttm9, resource: bindings, ignored listing per whitelist
May 6 18:05:47.247: INFO: namespace e2e-tests-secret-namespace-4ttm9 deletion completed in 8.084360106s
• [SLOW TEST:21.142 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:05:47.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
May 6 18:05:52.465: INFO: Successfully updated pod "labelsupdate36745375-8fc4-11ea-a618-0242ac110019"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:05:54.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dp5f7" for this suite.
May 6 18:06:18.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:06:18.965: INFO: namespace: e2e-tests-projected-dp5f7, resource: bindings, ignored listing per whitelist
May 6 18:06:19.015: INFO: namespace e2e-tests-projected-dp5f7 deletion completed in 24.394625604s
• [SLOW TEST:31.768 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:06:19.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-v694j
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-v694j
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-v694j
May 6 18:06:20.450: INFO: Found 0 stateful pods, waiting for 1
May 6 18:06:30.454: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
May 6 18:06:30.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v694j ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 6 18:06:30.857: INFO: stderr: "I0506 18:06:30.580091 1323 log.go:172] (0xc0007942c0) (0xc00068c780) Create stream\nI0506 18:06:30.580159 1323 log.go:172] (0xc0007942c0) (0xc00068c780) Stream added, broadcasting: 1\nI0506 18:06:30.582983 1323 log.go:172] (0xc0007942c0) Reply frame received for 1\nI0506 18:06:30.583058 1323 log.go:172] (0xc0007942c0) (0xc00068c820) Create stream\nI0506 18:06:30.583108 1323 log.go:172] (0xc0007942c0) (0xc00068c820) Stream added, broadcasting: 3\nI0506 18:06:30.584102 1323 log.go:172] (0xc0007942c0) Reply frame received for 3\nI0506 18:06:30.584176 1323 log.go:172] (0xc0007942c0) (0xc0006185a0) Create stream\nI0506 18:06:30.584197 1323 log.go:172] (0xc0007942c0) (0xc0006185a0) Stream added, broadcasting: 5\nI0506 18:06:30.585330 1323 log.go:172] (0xc0007942c0) Reply frame received for 5\nI0506 18:06:30.847252 1323 log.go:172] (0xc0007942c0) Data frame received for 5\nI0506 18:06:30.847286 1323 log.go:172] (0xc0006185a0) (5) Data frame handling\nI0506 18:06:30.847310 1323 log.go:172] (0xc0007942c0) Data frame received for 3\nI0506 18:06:30.847325 1323 log.go:172] (0xc00068c820) (3) Data frame handling\nI0506 18:06:30.847336 1323 log.go:172] (0xc00068c820) (3) Data frame sent\nI0506 18:06:30.847342 1323 log.go:172] (0xc0007942c0) Data frame received for 3\nI0506 18:06:30.847348 1323 log.go:172] (0xc00068c820) (3) Data frame handling\nI0506 18:06:30.850717 1323 log.go:172] (0xc0007942c0) Data frame received for 1\nI0506 18:06:30.850754 1323 log.go:172] (0xc00068c780) (1) Data frame handling\nI0506 18:06:30.850766 1323 log.go:172] (0xc00068c780) (1) Data frame sent\nI0506 18:06:30.850779 1323 log.go:172] (0xc0007942c0) (0xc00068c780) Stream removed, broadcasting: 1\nI0506 18:06:30.850867 1323 log.go:172] (0xc0007942c0) Go away received\nI0506 18:06:30.850983 1323 log.go:172] (0xc0007942c0) (0xc00068c780) Stream removed, broadcasting: 1\nI0506 18:06:30.851002 1323 log.go:172] (0xc0007942c0) (0xc00068c820) Stream removed, broadcasting: 3\nI0506 18:06:30.851011 1323 log.go:172] (0xc0007942c0) (0xc0006185a0) Stream removed, broadcasting: 5\n"
May 6 18:06:30.857: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 6 18:06:30.857: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 6 18:06:30.861: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 6 18:06:41.094: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 6 18:06:41.094: INFO: Waiting for statefulset status.replicas updated to 0
May 6 18:06:41.400: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999542s
May 6 18:06:42.508: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.853989249s
May 6 18:06:43.513: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.74541067s
May 6 18:06:44.752: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.740808168s
May 6 18:06:45.756: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.502053776s
May 6 18:06:46.760: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.498231778s
May 6 18:06:47.765: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.493999661s
May 6 18:06:48.769: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.488585019s
May 6 18:06:49.774: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.484876563s
May 6 18:06:50.779: INFO: Verifying statefulset ss doesn't scale past 1 for another 480.102347ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-v694j
May 6 18:06:51.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v694j ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 6 18:06:52.142: INFO: stderr: "I0506 18:06:52.080536 1345 log.go:172] (0xc000162790) (0xc0005ed360) Create stream\nI0506 18:06:52.080590 1345 log.go:172] (0xc000162790) (0xc0005ed360) Stream added, broadcasting: 1\nI0506 18:06:52.082329 1345 log.go:172] (0xc000162790) Reply frame received for 1\nI0506 18:06:52.082384 1345 log.go:172] (0xc000162790) (0xc00001a000) Create stream\nI0506 18:06:52.082404 1345 log.go:172] (0xc000162790) (0xc00001a000) Stream added, broadcasting: 3\nI0506 18:06:52.082978 1345 log.go:172] (0xc000162790) Reply frame received for 3\nI0506 18:06:52.083006 1345 log.go:172] (0xc000162790) (0xc000534000) Create stream\nI0506 18:06:52.083016 1345 log.go:172] (0xc000162790) (0xc000534000) Stream added, broadcasting: 5\nI0506 18:06:52.083676 1345 log.go:172] (0xc000162790) Reply frame received for 5\nI0506 18:06:52.137591 1345 log.go:172] (0xc000162790) Data frame received for 5\nI0506 18:06:52.137623 1345 log.go:172] (0xc000534000) (5) Data frame handling\nI0506 18:06:52.137664 1345 log.go:172] (0xc000162790) Data frame received for 3\nI0506 18:06:52.137711 1345 log.go:172] (0xc00001a000) (3) Data frame handling\nI0506 18:06:52.137738 1345 log.go:172] (0xc00001a000) (3) Data frame sent\nI0506 18:06:52.137775 1345 log.go:172] (0xc000162790) Data frame received for 1\nI0506 18:06:52.137806 1345 log.go:172] (0xc0005ed360) (1) Data frame handling\nI0506 18:06:52.137823 1345 log.go:172] (0xc0005ed360) (1) Data frame sent\nI0506 18:06:52.137840 1345 log.go:172] (0xc000162790) (0xc0005ed360) Stream removed, broadcasting: 1\nI0506 18:06:52.137875 1345 log.go:172] (0xc000162790) Data frame received for 3\nI0506 18:06:52.137896 1345 log.go:172] (0xc00001a000) (3) Data frame handling\nI0506 18:06:52.137921 1345 log.go:172] (0xc000162790) Go away received\nI0506 18:06:52.138017 1345 log.go:172] (0xc000162790) (0xc0005ed360) Stream removed, broadcasting: 1\nI0506 18:06:52.138038 1345 log.go:172] (0xc000162790) (0xc00001a000) Stream removed, broadcasting: 3\nI0506 18:06:52.138055 1345 log.go:172] (0xc000162790) (0xc000534000) Stream removed, broadcasting: 5\n"
May 6 18:06:52.143: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 6 18:06:52.143: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 6 18:06:52.146: INFO: Found 1 stateful pods, waiting for 3
May 6 18:07:02.151: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 6 18:07:02.152: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 6 18:07:02.152: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
May 6 18:07:02.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v694j ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 6 18:07:02.472: INFO: stderr: "I0506 18:07:02.288299 1368 log.go:172] (0xc00014c580) (0xc0007be000) Create stream\nI0506 18:07:02.288353 1368 log.go:172] (0xc00014c580) (0xc0007be000) Stream added, broadcasting: 1\nI0506 18:07:02.290718 1368 log.go:172] (0xc00014c580) Reply frame received for 1\nI0506 18:07:02.290761 1368 log.go:172] (0xc00014c580) (0xc00079eb40) Create stream\nI0506 18:07:02.290775 1368 log.go:172] (0xc00014c580) (0xc00079eb40) Stream added, broadcasting: 3\nI0506 18:07:02.291604 1368 log.go:172] (0xc00014c580) Reply frame received for 3\nI0506 18:07:02.291632 1368 log.go:172] (0xc00014c580) (0xc0007be140) Create stream\nI0506 18:07:02.291644 1368 log.go:172] (0xc00014c580) (0xc0007be140) Stream added, broadcasting: 5\nI0506 18:07:02.292497 1368 log.go:172] (0xc00014c580) Reply frame received for 5\nI0506 18:07:02.464815 1368 log.go:172] (0xc00014c580) Data frame received for 3\nI0506 18:07:02.464870 1368 log.go:172] (0xc00079eb40) (3) Data frame handling\nI0506 18:07:02.464911 1368 log.go:172] (0xc00079eb40) (3) Data frame sent\nI0506 18:07:02.464938 1368 log.go:172] (0xc00014c580) Data frame received for 3\nI0506 18:07:02.464953 1368 log.go:172] (0xc00079eb40) (3) Data frame handling\nI0506 18:07:02.465015 1368 log.go:172] (0xc00014c580) Data frame received for 5\nI0506 18:07:02.465063 1368 log.go:172] (0xc0007be140) (5) Data frame handling\nI0506 18:07:02.467217 1368 log.go:172] (0xc00014c580) Data frame received for 1\nI0506 18:07:02.467244 1368 log.go:172] (0xc0007be000) (1) Data frame handling\nI0506 18:07:02.467266 1368 log.go:172] (0xc0007be000) (1) Data frame sent\nI0506 18:07:02.467289 1368 log.go:172] (0xc00014c580) (0xc0007be000) Stream removed, broadcasting: 1\nI0506 18:07:02.467485 1368 log.go:172] (0xc00014c580) (0xc0007be000) Stream removed, broadcasting: 1\nI0506 18:07:02.467509 1368 log.go:172] (0xc00014c580) (0xc00079eb40) Stream removed, broadcasting: 3\nI0506 18:07:02.467525 1368 log.go:172] (0xc00014c580) Go away received\nI0506 18:07:02.467610 1368 log.go:172] (0xc00014c580) (0xc0007be140) Stream removed, broadcasting: 5\n"
May 6 18:07:02.472: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 6 18:07:02.472: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 6 18:07:02.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v694j ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 6 18:07:02.702: INFO: stderr: "I0506 18:07:02.597352 1390 log.go:172] (0xc0007582c0) (0xc00066e640) Create stream\nI0506 18:07:02.597457 1390 log.go:172] (0xc0007582c0) (0xc00066e640) Stream added, broadcasting: 1\nI0506 18:07:02.600159 1390 log.go:172] (0xc0007582c0) Reply frame received for 1\nI0506 18:07:02.600192 1390 log.go:172] (0xc0007582c0) (0xc0007ded20) Create stream\nI0506 18:07:02.600201 1390 log.go:172] (0xc0007582c0) (0xc0007ded20) Stream added, broadcasting: 3\nI0506 18:07:02.601430 1390 log.go:172] (0xc0007582c0) Reply frame received for 3\nI0506 18:07:02.601481 1390 log.go:172] (0xc0007582c0) (0xc00040a000) Create stream\nI0506 18:07:02.601502 1390 log.go:172] (0xc0007582c0) (0xc00040a000) Stream added, broadcasting: 5\nI0506 18:07:02.602423 1390 log.go:172] (0xc0007582c0) Reply frame received for 5\nI0506 18:07:02.695233 1390 log.go:172] (0xc0007582c0) Data frame received for 3\nI0506 18:07:02.695264 1390 log.go:172] (0xc0007ded20) (3) Data frame handling\nI0506 18:07:02.695285 1390 log.go:172] (0xc0007ded20) (3) Data frame sent\nI0506 18:07:02.695318 1390 log.go:172] (0xc0007582c0) Data frame received for 5\nI0506 18:07:02.695331 1390 log.go:172] (0xc00040a000) (5) Data frame handling\nI0506 18:07:02.695816 1390 log.go:172] (0xc0007582c0) Data frame received for 3\nI0506 18:07:02.695833 1390 log.go:172] (0xc0007ded20) (3) Data frame handling\nI0506 18:07:02.698046 1390 log.go:172] (0xc0007582c0) Data frame received for 1\nI0506 18:07:02.698069 1390 log.go:172] (0xc00066e640) (1) Data frame handling\nI0506 18:07:02.698091 1390
log.go:172] (0xc00066e640) (1) Data frame sent\nI0506 18:07:02.698113 1390 log.go:172] (0xc0007582c0) (0xc00066e640) Stream removed, broadcasting: 1\nI0506 18:07:02.698136 1390 log.go:172] (0xc0007582c0) Go away received\nI0506 18:07:02.698326 1390 log.go:172] (0xc0007582c0) (0xc00066e640) Stream removed, broadcasting: 1\nI0506 18:07:02.698343 1390 log.go:172] (0xc0007582c0) (0xc0007ded20) Stream removed, broadcasting: 3\nI0506 18:07:02.698349 1390 log.go:172] (0xc0007582c0) (0xc00040a000) Stream removed, broadcasting: 5\n" May 6 18:07:02.702: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 6 18:07:02.702: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 6 18:07:02.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v694j ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 6 18:07:02.995: INFO: stderr: "I0506 18:07:02.823267 1412 log.go:172] (0xc0001386e0) (0xc0005db400) Create stream\nI0506 18:07:02.823322 1412 log.go:172] (0xc0001386e0) (0xc0005db400) Stream added, broadcasting: 1\nI0506 18:07:02.825869 1412 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0506 18:07:02.825907 1412 log.go:172] (0xc0001386e0) (0xc000518000) Create stream\nI0506 18:07:02.825922 1412 log.go:172] (0xc0001386e0) (0xc000518000) Stream added, broadcasting: 3\nI0506 18:07:02.826957 1412 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0506 18:07:02.827003 1412 log.go:172] (0xc0001386e0) (0xc000412000) Create stream\nI0506 18:07:02.827016 1412 log.go:172] (0xc0001386e0) (0xc000412000) Stream added, broadcasting: 5\nI0506 18:07:02.828123 1412 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0506 18:07:02.987063 1412 log.go:172] (0xc0001386e0) Data frame received for 3\nI0506 18:07:02.987111 1412 log.go:172] (0xc000518000) (3) Data frame handling\nI0506 
18:07:02.987134 1412 log.go:172] (0xc000518000) (3) Data frame sent\nI0506 18:07:02.987667 1412 log.go:172] (0xc0001386e0) Data frame received for 5\nI0506 18:07:02.987697 1412 log.go:172] (0xc000412000) (5) Data frame handling\nI0506 18:07:02.987725 1412 log.go:172] (0xc0001386e0) Data frame received for 3\nI0506 18:07:02.987748 1412 log.go:172] (0xc000518000) (3) Data frame handling\nI0506 18:07:02.989029 1412 log.go:172] (0xc0001386e0) Data frame received for 1\nI0506 18:07:02.989083 1412 log.go:172] (0xc0005db400) (1) Data frame handling\nI0506 18:07:02.989324 1412 log.go:172] (0xc0005db400) (1) Data frame sent\nI0506 18:07:02.989364 1412 log.go:172] (0xc0001386e0) (0xc0005db400) Stream removed, broadcasting: 1\nI0506 18:07:02.989395 1412 log.go:172] (0xc0001386e0) Go away received\nI0506 18:07:02.989678 1412 log.go:172] (0xc0001386e0) (0xc0005db400) Stream removed, broadcasting: 1\nI0506 18:07:02.989716 1412 log.go:172] (0xc0001386e0) (0xc000518000) Stream removed, broadcasting: 3\nI0506 18:07:02.989747 1412 log.go:172] (0xc0001386e0) (0xc000412000) Stream removed, broadcasting: 5\n" May 6 18:07:02.995: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 6 18:07:02.995: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 6 18:07:02.995: INFO: Waiting for statefulset status.replicas updated to 0 May 6 18:07:03.009: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 6 18:07:13.414: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 18:07:13.415: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 6 18:07:13.415: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 6 18:07:13.486: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999998987s May 6 18:07:14.926: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 8.933597668s May 6 18:07:15.959: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.493608749s May 6 18:07:17.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.460784102s May 6 18:07:18.163: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.302538056s May 6 18:07:19.189: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.256807219s May 6 18:07:20.195: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.231136593s May 6 18:07:21.199: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.225203856s May 6 18:07:22.205: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.220657198s May 6 18:07:23.556: INFO: Verifying statefulset ss doesn't scale past 3 for another 214.928831ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-v694j May 6 18:07:24.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v694j ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:07:24.951: INFO: stderr: "I0506 18:07:24.858695 1435 log.go:172] (0xc00013a6e0) (0xc000659360) Create stream\nI0506 18:07:24.858739 1435 log.go:172] (0xc00013a6e0) (0xc000659360) Stream added, broadcasting: 1\nI0506 18:07:24.860651 1435 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0506 18:07:24.860694 1435 log.go:172] (0xc00013a6e0) (0xc0004a0000) Create stream\nI0506 18:07:24.860712 1435 log.go:172] (0xc00013a6e0) (0xc0004a0000) Stream added, broadcasting: 3\nI0506 18:07:24.863151 1435 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0506 18:07:24.863178 1435 log.go:172] (0xc00013a6e0) (0xc0004a00a0) Create stream\nI0506 18:07:24.863192 1435 log.go:172] (0xc00013a6e0) (0xc0004a00a0) Stream added, broadcasting: 5\nI0506 18:07:24.864098 1435 log.go:172] (0xc00013a6e0) Reply frame received 
for 5\nI0506 18:07:24.944651 1435 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0506 18:07:24.944679 1435 log.go:172] (0xc0004a0000) (3) Data frame handling\nI0506 18:07:24.944719 1435 log.go:172] (0xc0004a0000) (3) Data frame sent\nI0506 18:07:24.945217 1435 log.go:172] (0xc00013a6e0) Data frame received for 5\nI0506 18:07:24.945328 1435 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0506 18:07:24.945354 1435 log.go:172] (0xc0004a0000) (3) Data frame handling\nI0506 18:07:24.945375 1435 log.go:172] (0xc0004a00a0) (5) Data frame handling\nI0506 18:07:24.946681 1435 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0506 18:07:24.946715 1435 log.go:172] (0xc000659360) (1) Data frame handling\nI0506 18:07:24.946732 1435 log.go:172] (0xc000659360) (1) Data frame sent\nI0506 18:07:24.946747 1435 log.go:172] (0xc00013a6e0) (0xc000659360) Stream removed, broadcasting: 1\nI0506 18:07:24.946928 1435 log.go:172] (0xc00013a6e0) (0xc000659360) Stream removed, broadcasting: 1\nI0506 18:07:24.946943 1435 log.go:172] (0xc00013a6e0) (0xc0004a0000) Stream removed, broadcasting: 3\nI0506 18:07:24.947127 1435 log.go:172] (0xc00013a6e0) (0xc0004a00a0) Stream removed, broadcasting: 5\n" May 6 18:07:24.952: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 6 18:07:24.952: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 6 18:07:24.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v694j ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:07:25.543: INFO: stderr: "I0506 18:07:25.467364 1457 log.go:172] (0xc0008c8210) (0xc0008c45a0) Create stream\nI0506 18:07:25.467428 1457 log.go:172] (0xc0008c8210) (0xc0008c45a0) Stream added, broadcasting: 1\nI0506 18:07:25.469610 1457 log.go:172] (0xc0008c8210) Reply frame received for 1\nI0506 18:07:25.469662 1457 
log.go:172] (0xc0008c8210) (0xc0002dcc80) Create stream\nI0506 18:07:25.469679 1457 log.go:172] (0xc0008c8210) (0xc0002dcc80) Stream added, broadcasting: 3\nI0506 18:07:25.470442 1457 log.go:172] (0xc0008c8210) Reply frame received for 3\nI0506 18:07:25.470484 1457 log.go:172] (0xc0008c8210) (0xc0002dcdc0) Create stream\nI0506 18:07:25.470494 1457 log.go:172] (0xc0008c8210) (0xc0002dcdc0) Stream added, broadcasting: 5\nI0506 18:07:25.471188 1457 log.go:172] (0xc0008c8210) Reply frame received for 5\nI0506 18:07:25.536370 1457 log.go:172] (0xc0008c8210) Data frame received for 3\nI0506 18:07:25.536418 1457 log.go:172] (0xc0002dcc80) (3) Data frame handling\nI0506 18:07:25.536432 1457 log.go:172] (0xc0002dcc80) (3) Data frame sent\nI0506 18:07:25.536443 1457 log.go:172] (0xc0008c8210) Data frame received for 3\nI0506 18:07:25.536454 1457 log.go:172] (0xc0002dcc80) (3) Data frame handling\nI0506 18:07:25.536489 1457 log.go:172] (0xc0008c8210) Data frame received for 5\nI0506 18:07:25.536500 1457 log.go:172] (0xc0002dcdc0) (5) Data frame handling\nI0506 18:07:25.538166 1457 log.go:172] (0xc0008c8210) Data frame received for 1\nI0506 18:07:25.538198 1457 log.go:172] (0xc0008c45a0) (1) Data frame handling\nI0506 18:07:25.538248 1457 log.go:172] (0xc0008c45a0) (1) Data frame sent\nI0506 18:07:25.538300 1457 log.go:172] (0xc0008c8210) (0xc0008c45a0) Stream removed, broadcasting: 1\nI0506 18:07:25.538345 1457 log.go:172] (0xc0008c8210) Go away received\nI0506 18:07:25.538567 1457 log.go:172] (0xc0008c8210) (0xc0008c45a0) Stream removed, broadcasting: 1\nI0506 18:07:25.538610 1457 log.go:172] (0xc0008c8210) (0xc0002dcc80) Stream removed, broadcasting: 3\nI0506 18:07:25.538644 1457 log.go:172] (0xc0008c8210) (0xc0002dcdc0) Stream removed, broadcasting: 5\n" May 6 18:07:25.543: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 6 18:07:25.543: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> 
'/usr/share/nginx/html/index.html' May 6 18:07:25.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-v694j ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:07:25.967: INFO: stderr: "I0506 18:07:25.896316 1480 log.go:172] (0xc00082a2c0) (0xc000726640) Create stream\nI0506 18:07:25.896382 1480 log.go:172] (0xc00082a2c0) (0xc000726640) Stream added, broadcasting: 1\nI0506 18:07:25.898998 1480 log.go:172] (0xc00082a2c0) Reply frame received for 1\nI0506 18:07:25.899048 1480 log.go:172] (0xc00082a2c0) (0xc00067cd20) Create stream\nI0506 18:07:25.899067 1480 log.go:172] (0xc00082a2c0) (0xc00067cd20) Stream added, broadcasting: 3\nI0506 18:07:25.899885 1480 log.go:172] (0xc00082a2c0) Reply frame received for 3\nI0506 18:07:25.899916 1480 log.go:172] (0xc00082a2c0) (0xc0007266e0) Create stream\nI0506 18:07:25.899927 1480 log.go:172] (0xc00082a2c0) (0xc0007266e0) Stream added, broadcasting: 5\nI0506 18:07:25.900821 1480 log.go:172] (0xc00082a2c0) Reply frame received for 5\nI0506 18:07:25.961322 1480 log.go:172] (0xc00082a2c0) Data frame received for 5\nI0506 18:07:25.961487 1480 log.go:172] (0xc00082a2c0) Data frame received for 3\nI0506 18:07:25.961519 1480 log.go:172] (0xc00067cd20) (3) Data frame handling\nI0506 18:07:25.961529 1480 log.go:172] (0xc00067cd20) (3) Data frame sent\nI0506 18:07:25.961537 1480 log.go:172] (0xc00082a2c0) Data frame received for 3\nI0506 18:07:25.961544 1480 log.go:172] (0xc00067cd20) (3) Data frame handling\nI0506 18:07:25.961578 1480 log.go:172] (0xc0007266e0) (5) Data frame handling\nI0506 18:07:25.963165 1480 log.go:172] (0xc00082a2c0) Data frame received for 1\nI0506 18:07:25.963191 1480 log.go:172] (0xc000726640) (1) Data frame handling\nI0506 18:07:25.963210 1480 log.go:172] (0xc000726640) (1) Data frame sent\nI0506 18:07:25.963235 1480 log.go:172] (0xc00082a2c0) (0xc000726640) Stream removed, broadcasting: 1\nI0506 18:07:25.963316 1480 
log.go:172] (0xc00082a2c0) Go away received\nI0506 18:07:25.963443 1480 log.go:172] (0xc00082a2c0) (0xc000726640) Stream removed, broadcasting: 1\nI0506 18:07:25.963460 1480 log.go:172] (0xc00082a2c0) (0xc00067cd20) Stream removed, broadcasting: 3\nI0506 18:07:25.963468 1480 log.go:172] (0xc00082a2c0) (0xc0007266e0) Stream removed, broadcasting: 5\n" May 6 18:07:25.967: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 6 18:07:25.967: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 6 18:07:25.967: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 6 18:07:56.051: INFO: Deleting all statefulset in ns e2e-tests-statefulset-v694j May 6 18:07:56.054: INFO: Scaling statefulset ss to 0 May 6 18:07:56.078: INFO: Waiting for statefulset status.replicas updated to 0 May 6 18:07:56.080: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:07:56.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-v694j" for this suite. 
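[Editor's note] The `mv` commands in this transcript are how the test toggles pod health: the StatefulSet pods serve `index.html` through nginx, and a readiness probe fetches it, so moving the file to `/tmp` makes the probe fail (Ready=false) while moving it back restores readiness. A minimal sketch of such a probe, assuming the stock nginx image and default paths (not the exact manifest used by the suite):

```yaml
# Hypothetical container spec: readiness tracks the presence of index.html.
containers:
- name: nginx
  image: nginx
  ports:
  - containerPort: 80
  readinessProbe:
    httpGet:
      path: /index.html   # starts returning 404 once the file is moved aside
      port: 80
    periodSeconds: 1
```

With the file moved aside, each pod stays Running but NotReady, which is why the log above shows the controller refusing to scale while any pod is unhealthy ("Verifying statefulset ss doesn't scale past 3").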
May 6 18:08:02.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:08:02.432: INFO: namespace: e2e-tests-statefulset-v694j, resource: bindings, ignored listing per whitelist May 6 18:08:02.485: INFO: namespace e2e-tests-statefulset-v694j deletion completed in 6.388845279s • [SLOW TEST:103.470 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:08:02.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 6 18:08:02.802: INFO: Waiting up to 5m0s for pod "downward-api-86e883df-8fc4-11ea-a618-0242ac110019" in namespace "e2e-tests-downward-api-v7glb" to be "success or failure" May 6 18:08:02.816: INFO: Pod "downward-api-86e883df-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.102174ms May 6 18:08:04.820: INFO: Pod "downward-api-86e883df-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018075576s May 6 18:08:06.824: INFO: Pod "downward-api-86e883df-8fc4-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021882864s STEP: Saw pod success May 6 18:08:06.824: INFO: Pod "downward-api-86e883df-8fc4-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:08:06.827: INFO: Trying to get logs from node hunter-worker pod downward-api-86e883df-8fc4-11ea-a618-0242ac110019 container dapi-container: STEP: delete the pod May 6 18:08:06.914: INFO: Waiting for pod downward-api-86e883df-8fc4-11ea-a618-0242ac110019 to disappear May 6 18:08:07.027: INFO: Pod downward-api-86e883df-8fc4-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:08:07.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-v7glb" for this suite. 
May 6 18:08:13.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:08:13.063: INFO: namespace: e2e-tests-downward-api-v7glb, resource: bindings, ignored listing per whitelist May 6 18:08:13.132: INFO: namespace e2e-tests-downward-api-v7glb deletion completed in 6.101300772s • [SLOW TEST:10.646 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:08:13.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 6 18:08:13.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 6 18:08:13.379: INFO: stderr: "" May 6 18:08:13.379: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:08:13.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8wcs6" for this suite. May 6 18:08:19.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:08:19.439: INFO: namespace: e2e-tests-kubectl-8wcs6, resource: bindings, ignored listing per whitelist May 6 18:08:19.470: INFO: namespace e2e-tests-kubectl-8wcs6 deletion completed in 6.087119312s • [SLOW TEST:6.338 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:08:19.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 6 18:08:19.652: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:08:28.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-wvqg6" for this suite. May 6 18:08:36.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:08:36.828: INFO: namespace: e2e-tests-init-container-wvqg6, resource: bindings, ignored listing per whitelist May 6 18:08:36.876: INFO: namespace e2e-tests-init-container-wvqg6 deletion completed in 8.217216818s • [SLOW TEST:17.405 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:08:36.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] 
should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 6 18:08:36.970: INFO: Waiting up to 5m0s for pod "var-expansion-9b454460-8fc4-11ea-a618-0242ac110019" in namespace "e2e-tests-var-expansion-xpqsw" to be "success or failure" May 6 18:08:36.973: INFO: Pod "var-expansion-9b454460-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090926ms May 6 18:08:38.976: INFO: Pod "var-expansion-9b454460-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005488384s May 6 18:08:40.980: INFO: Pod "var-expansion-9b454460-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009913858s May 6 18:08:43.063: INFO: Pod "var-expansion-9b454460-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092219158s May 6 18:08:45.152: INFO: Pod "var-expansion-9b454460-8fc4-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.181559224s STEP: Saw pod success May 6 18:08:45.152: INFO: Pod "var-expansion-9b454460-8fc4-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:08:45.155: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-9b454460-8fc4-11ea-a618-0242ac110019 container dapi-container: STEP: delete the pod May 6 18:08:45.517: INFO: Waiting for pod var-expansion-9b454460-8fc4-11ea-a618-0242ac110019 to disappear May 6 18:08:45.523: INFO: Pod var-expansion-9b454460-8fc4-11ea-a618-0242ac110019 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:08:45.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-xpqsw" for this suite. 
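[Editor's note] The Variable Expansion test above verifies that `$(VAR)` references in a container's `args` are substituted from previously defined environment variables before the process starts. A sketch of the mechanism (names are illustrative, not taken from the suite):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]   # $(MESSAGE) is expanded by Kubernetes, not the shell
    env:
    - name: MESSAGE
      value: "hello"
```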
May 6 18:08:51.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:08:51.547: INFO: namespace: e2e-tests-var-expansion-xpqsw, resource: bindings, ignored listing per whitelist May 6 18:08:51.616: INFO: namespace e2e-tests-var-expansion-xpqsw deletion completed in 6.089581328s • [SLOW TEST:14.739 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:08:51.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 18:08:58.731: INFO: Waiting up to 5m0s for pod "client-envvars-a83c486d-8fc4-11ea-a618-0242ac110019" in namespace "e2e-tests-pods-m7t2s" to be "success or failure" May 6 18:08:58.931: INFO: Pod "client-envvars-a83c486d-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 199.968313ms May 6 18:09:00.936: INFO: Pod "client-envvars-a83c486d-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204477847s May 6 18:09:02.940: INFO: Pod "client-envvars-a83c486d-8fc4-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.208761372s STEP: Saw pod success May 6 18:09:02.940: INFO: Pod "client-envvars-a83c486d-8fc4-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:09:02.944: INFO: Trying to get logs from node hunter-worker pod client-envvars-a83c486d-8fc4-11ea-a618-0242ac110019 container env3cont: STEP: delete the pod May 6 18:09:02.972: INFO: Waiting for pod client-envvars-a83c486d-8fc4-11ea-a618-0242ac110019 to disappear May 6 18:09:03.015: INFO: Pod client-envvars-a83c486d-8fc4-11ea-a618-0242ac110019 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:09:03.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-m7t2s" for this suite. 
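[Editor's note] The Pods test above checks service environment variables: for every Service that exists when a pod is created, the kubelet injects `<SVC>_SERVICE_HOST` and `<SVC>_SERVICE_PORT` variables (plus Docker-link-style ones) into the pod's containers. A sketch under an assumed service name (illustrative, not the suite's generated objects):

```yaml
# A pod created after this Service exists sees variables such as:
#   FOOSERVICE_SERVICE_HOST=<cluster IP>
#   FOOSERVICE_SERVICE_PORT=8765
apiVersion: v1
kind: Service
metadata:
  name: fooservice   # illustrative name; env var prefix is the upper-cased name
spec:
  selector:
    name: server
  ports:
  - port: 8765
    targetPort: 8080
```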
May 6 18:09:45.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:09:45.283: INFO: namespace: e2e-tests-pods-m7t2s, resource: bindings, ignored listing per whitelist May 6 18:09:45.294: INFO: namespace e2e-tests-pods-m7t2s deletion completed in 42.275734238s • [SLOW TEST:53.678 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:09:45.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 6 18:09:45.432: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:45.435: INFO: Number of nodes with available pods: 0 May 6 18:09:45.435: INFO: Node hunter-worker is running more than one daemon pod May 6 18:09:46.439: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:46.442: INFO: Number of nodes with available pods: 0 May 6 18:09:46.442: INFO: Node hunter-worker is running more than one daemon pod May 6 18:09:47.463: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:47.465: INFO: Number of nodes with available pods: 0 May 6 18:09:47.465: INFO: Node hunter-worker is running more than one daemon pod May 6 18:09:48.668: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:48.673: INFO: Number of nodes with available pods: 0 May 6 18:09:48.673: INFO: Node hunter-worker is running more than one daemon pod May 6 18:09:49.439: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:49.443: INFO: Number of nodes with available pods: 0 May 6 18:09:49.443: INFO: Node hunter-worker is running more than one daemon pod May 6 18:09:50.439: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:50.442: INFO: Number of nodes with available pods: 2 May 6 18:09:50.442: INFO: Number of 
running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 6 18:09:50.480: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:50.483: INFO: Number of nodes with available pods: 1 May 6 18:09:50.483: INFO: Node hunter-worker is running more than one daemon pod May 6 18:09:51.488: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:51.491: INFO: Number of nodes with available pods: 1 May 6 18:09:51.491: INFO: Node hunter-worker is running more than one daemon pod May 6 18:09:52.488: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:52.491: INFO: Number of nodes with available pods: 1 May 6 18:09:52.491: INFO: Node hunter-worker is running more than one daemon pod May 6 18:09:53.489: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:53.493: INFO: Number of nodes with available pods: 1 May 6 18:09:53.493: INFO: Node hunter-worker is running more than one daemon pod May 6 18:09:54.488: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:54.492: INFO: Number of nodes with available pods: 1 May 6 18:09:54.492: INFO: Node hunter-worker is running more than one daemon pod May 6 18:09:55.489: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this 
node May 6 18:09:55.493: INFO: Number of nodes with available pods: 1 May 6 18:09:55.493: INFO: Node hunter-worker is running more than one daemon pod May 6 18:09:56.489: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:56.493: INFO: Number of nodes with available pods: 1 May 6 18:09:56.493: INFO: Node hunter-worker is running more than one daemon pod May 6 18:09:57.487: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:57.490: INFO: Number of nodes with available pods: 1 May 6 18:09:57.490: INFO: Node hunter-worker is running more than one daemon pod May 6 18:09:58.487: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:58.490: INFO: Number of nodes with available pods: 1 May 6 18:09:58.490: INFO: Node hunter-worker is running more than one daemon pod May 6 18:09:59.488: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:09:59.491: INFO: Number of nodes with available pods: 1 May 6 18:09:59.491: INFO: Node hunter-worker is running more than one daemon pod May 6 18:10:00.488: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:10:00.491: INFO: Number of nodes with available pods: 1 May 6 18:10:00.491: INFO: Node hunter-worker is running more than one daemon pod May 6 18:10:02.015: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:10:02.100: INFO: Number of nodes with available pods: 1 May 6 18:10:02.100: INFO: Node hunter-worker is running more than one daemon pod May 6 18:10:02.488: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:10:02.491: INFO: Number of nodes with available pods: 1 May 6 18:10:02.491: INFO: Node hunter-worker is running more than one daemon pod May 6 18:10:03.650: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:10:03.654: INFO: Number of nodes with available pods: 1 May 6 18:10:03.654: INFO: Node hunter-worker is running more than one daemon pod May 6 18:10:04.626: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:10:04.630: INFO: Number of nodes with available pods: 1 May 6 18:10:04.630: INFO: Node hunter-worker is running more than one daemon pod May 6 18:10:05.487: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:10:05.490: INFO: Number of nodes with available pods: 1 May 6 18:10:05.490: INFO: Node hunter-worker is running more than one daemon pod May 6 18:10:06.488: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:10:06.492: INFO: Number of nodes with available pods: 1 May 6 18:10:06.492: INFO: Node hunter-worker is running more than one daemon pod May 6 18:10:07.496: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:10:07.499: INFO: Number of nodes with available pods: 2 May 6 18:10:07.499: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-p2h5w, will wait for the garbage collector to delete the pods May 6 18:10:07.566: INFO: Deleting DaemonSet.extensions daemon-set took: 12.371496ms May 6 18:10:07.667: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.18849ms May 6 18:10:11.969: INFO: Number of nodes with available pods: 0 May 6 18:10:11.969: INFO: Number of running nodes: 0, number of available pods: 0 May 6 18:10:11.971: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-p2h5w/daemonsets","resourceVersion":"9091673"},"items":null} May 6 18:10:11.972: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-p2h5w/pods","resourceVersion":"9091673"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:10:11.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-p2h5w" for this suite. 
May 6 18:10:18.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:10:18.025: INFO: namespace: e2e-tests-daemonsets-p2h5w, resource: bindings, ignored listing per whitelist May 6 18:10:18.095: INFO: namespace e2e-tests-daemonsets-p2h5w deletion completed in 6.111139827s • [SLOW TEST:32.801 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:10:18.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 6 18:10:18.874: INFO: Waiting up to 5m0s for pod "pod-d801a9b2-8fc4-11ea-a618-0242ac110019" in namespace "e2e-tests-emptydir-889kf" to be "success or failure" May 6 18:10:18.935: INFO: Pod "pod-d801a9b2-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 60.78486ms May 6 18:10:20.939: INFO: Pod "pod-d801a9b2-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.064666378s May 6 18:10:22.942: INFO: Pod "pod-d801a9b2-8fc4-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067965608s STEP: Saw pod success May 6 18:10:22.943: INFO: Pod "pod-d801a9b2-8fc4-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:10:22.944: INFO: Trying to get logs from node hunter-worker2 pod pod-d801a9b2-8fc4-11ea-a618-0242ac110019 container test-container: STEP: delete the pod May 6 18:10:22.998: INFO: Waiting for pod pod-d801a9b2-8fc4-11ea-a618-0242ac110019 to disappear May 6 18:10:23.122: INFO: Pod pod-d801a9b2-8fc4-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:10:23.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-889kf" for this suite. May 6 18:10:29.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:10:29.343: INFO: namespace: e2e-tests-emptydir-889kf, resource: bindings, ignored listing per whitelist May 6 18:10:29.349: INFO: namespace e2e-tests-emptydir-889kf deletion completed in 6.223873145s • [SLOW TEST:11.253 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes 
client May 6 18:10:29.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
May 6 18:10:29.482: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 5.97554ms)
May 6 18:10:29.485: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.847558ms)
May 6 18:10:29.488: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.620607ms)
May 6 18:10:29.490: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.512547ms)
May 6 18:10:29.493: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.29235ms)
May 6 18:10:29.495: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.489817ms)
May 6 18:10:29.498: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.453306ms)
May 6 18:10:29.500: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.302775ms)
May 6 18:10:29.502: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.234472ms)
May 6 18:10:29.505: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.616795ms)
May 6 18:10:29.507: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.324898ms)
May 6 18:10:29.510: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.607535ms)
May 6 18:10:29.513: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.309542ms)
May 6 18:10:29.516: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.564233ms)
May 6 18:10:29.519: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.287011ms)
May 6 18:10:29.522: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.160075ms)
May 6 18:10:29.525: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.773399ms)
May 6 18:10:29.528: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.845867ms)
May 6 18:10:29.532: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.516911ms)
May 6 18:10:29.535: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.367665ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:10:29.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-ln9pp" for this suite. May 6 18:10:35.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:10:35.657: INFO: namespace: e2e-tests-proxy-ln9pp, resource: bindings, ignored listing per whitelist May 6 18:10:35.678: INFO: namespace e2e-tests-proxy-ln9pp deletion completed in 6.140107397s • [SLOW TEST:6.329 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:10:35.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-e218224e-8fc4-11ea-a618-0242ac110019 STEP: Creating a pod to test consume configMaps 
May 6 18:10:35.807: INFO: Waiting up to 5m0s for pod "pod-configmaps-e21af6f8-8fc4-11ea-a618-0242ac110019" in namespace "e2e-tests-configmap-btcnp" to be "success or failure" May 6 18:10:35.888: INFO: Pod "pod-configmaps-e21af6f8-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 81.359761ms May 6 18:10:37.923: INFO: Pod "pod-configmaps-e21af6f8-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115671469s May 6 18:10:40.115: INFO: Pod "pod-configmaps-e21af6f8-8fc4-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.307794276s STEP: Saw pod success May 6 18:10:40.115: INFO: Pod "pod-configmaps-e21af6f8-8fc4-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:10:40.117: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-e21af6f8-8fc4-11ea-a618-0242ac110019 container configmap-volume-test: STEP: delete the pod May 6 18:10:40.241: INFO: Waiting for pod pod-configmaps-e21af6f8-8fc4-11ea-a618-0242ac110019 to disappear May 6 18:10:40.457: INFO: Pod pod-configmaps-e21af6f8-8fc4-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:10:40.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-btcnp" for this suite. 
May 6 18:10:46.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:10:46.659: INFO: namespace: e2e-tests-configmap-btcnp, resource: bindings, ignored listing per whitelist May 6 18:10:46.662: INFO: namespace e2e-tests-configmap-btcnp deletion completed in 6.201638711s • [SLOW TEST:10.984 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:10:46.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 6 18:10:46.829: INFO: Waiting up to 5m0s for pod "pod-e8a734bb-8fc4-11ea-a618-0242ac110019" in namespace "e2e-tests-emptydir-smlhd" to be "success or failure" May 6 18:10:46.899: INFO: Pod "pod-e8a734bb-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 70.693775ms May 6 18:10:48.990: INFO: Pod "pod-e8a734bb-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.160968044s May 6 18:10:50.994: INFO: Pod "pod-e8a734bb-8fc4-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 4.165199126s May 6 18:10:52.998: INFO: Pod "pod-e8a734bb-8fc4-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.169640678s STEP: Saw pod success May 6 18:10:52.998: INFO: Pod "pod-e8a734bb-8fc4-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:10:53.001: INFO: Trying to get logs from node hunter-worker2 pod pod-e8a734bb-8fc4-11ea-a618-0242ac110019 container test-container: STEP: delete the pod May 6 18:10:54.410: INFO: Waiting for pod pod-e8a734bb-8fc4-11ea-a618-0242ac110019 to disappear May 6 18:10:54.428: INFO: Pod pod-e8a734bb-8fc4-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:10:54.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-smlhd" for this suite. 
May 6 18:11:00.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:11:00.771: INFO: namespace: e2e-tests-emptydir-smlhd, resource: bindings, ignored listing per whitelist May 6 18:11:01.192: INFO: namespace e2e-tests-emptydir-smlhd deletion completed in 6.638173234s • [SLOW TEST:14.531 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:11:01.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-f158c122-8fc4-11ea-a618-0242ac110019 STEP: Creating a pod to test consume configMaps May 6 18:11:01.394: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f15b269f-8fc4-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-d9496" to be "success or failure" May 6 18:11:01.463: INFO: Pod "pod-projected-configmaps-f15b269f-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 69.119678ms May 6 18:11:03.468: INFO: Pod "pod-projected-configmaps-f15b269f-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073736446s May 6 18:11:05.472: INFO: Pod "pod-projected-configmaps-f15b269f-8fc4-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077360019s May 6 18:11:07.474: INFO: Pod "pod-projected-configmaps-f15b269f-8fc4-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080169805s STEP: Saw pod success May 6 18:11:07.475: INFO: Pod "pod-projected-configmaps-f15b269f-8fc4-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:11:07.476: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-f15b269f-8fc4-11ea-a618-0242ac110019 container projected-configmap-volume-test: STEP: delete the pod May 6 18:11:07.542: INFO: Waiting for pod pod-projected-configmaps-f15b269f-8fc4-11ea-a618-0242ac110019 to disappear May 6 18:11:07.600: INFO: Pod pod-projected-configmaps-f15b269f-8fc4-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:11:07.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d9496" for this suite. 
May 6 18:11:13.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:11:13.843: INFO: namespace: e2e-tests-projected-d9496, resource: bindings, ignored listing per whitelist May 6 18:11:13.876: INFO: namespace e2e-tests-projected-d9496 deletion completed in 6.272587792s • [SLOW TEST:12.683 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:11:13.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 6 18:11:14.122: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zvg2z,SelfLink:/api/v1/namespaces/e2e-tests-watch-zvg2z/configmaps/e2e-watch-test-label-changed,UID:f8eb26e9-8fc4-11ea-99e8-0242ac110002,ResourceVersion:9091931,Generation:0,CreationTimestamp:2020-05-06 18:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 6 18:11:14.122: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zvg2z,SelfLink:/api/v1/namespaces/e2e-tests-watch-zvg2z/configmaps/e2e-watch-test-label-changed,UID:f8eb26e9-8fc4-11ea-99e8-0242ac110002,ResourceVersion:9091932,Generation:0,CreationTimestamp:2020-05-06 18:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 6 18:11:14.122: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zvg2z,SelfLink:/api/v1/namespaces/e2e-tests-watch-zvg2z/configmaps/e2e-watch-test-label-changed,UID:f8eb26e9-8fc4-11ea-99e8-0242ac110002,ResourceVersion:9091933,Generation:0,CreationTimestamp:2020-05-06 18:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 6 18:11:24.279: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zvg2z,SelfLink:/api/v1/namespaces/e2e-tests-watch-zvg2z/configmaps/e2e-watch-test-label-changed,UID:f8eb26e9-8fc4-11ea-99e8-0242ac110002,ResourceVersion:9091954,Generation:0,CreationTimestamp:2020-05-06 18:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 6 18:11:24.279: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zvg2z,SelfLink:/api/v1/namespaces/e2e-tests-watch-zvg2z/configmaps/e2e-watch-test-label-changed,UID:f8eb26e9-8fc4-11ea-99e8-0242ac110002,ResourceVersion:9091955,Generation:0,CreationTimestamp:2020-05-06 18:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 6 18:11:24.279: 
INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zvg2z,SelfLink:/api/v1/namespaces/e2e-tests-watch-zvg2z/configmaps/e2e-watch-test-label-changed,UID:f8eb26e9-8fc4-11ea-99e8-0242ac110002,ResourceVersion:9091956,Generation:0,CreationTimestamp:2020-05-06 18:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:11:24.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-zvg2z" for this suite. May 6 18:11:30.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:11:30.391: INFO: namespace: e2e-tests-watch-zvg2z, resource: bindings, ignored listing per whitelist May 6 18:11:30.410: INFO: namespace e2e-tests-watch-zvg2z deletion completed in 6.110224989s • [SLOW TEST:16.534 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:11:30.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 6 18:11:31.028: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:31.030: INFO: Number of nodes with available pods: 0 May 6 18:11:31.030: INFO: Node hunter-worker is running more than one daemon pod May 6 18:11:32.099: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:32.117: INFO: Number of nodes with available pods: 0 May 6 18:11:32.118: INFO: Node hunter-worker is running more than one daemon pod May 6 18:11:33.035: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:33.039: INFO: Number of nodes with available pods: 0 May 6 18:11:33.039: INFO: Node hunter-worker is running more than one daemon pod May 6 18:11:34.044: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:34.088: INFO: Number of nodes with 
available pods: 0 May 6 18:11:34.088: INFO: Node hunter-worker is running more than one daemon pod May 6 18:11:35.035: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:35.038: INFO: Number of nodes with available pods: 0 May 6 18:11:35.038: INFO: Node hunter-worker is running more than one daemon pod May 6 18:11:36.171: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:36.493: INFO: Number of nodes with available pods: 0 May 6 18:11:36.493: INFO: Node hunter-worker is running more than one daemon pod May 6 18:11:37.056: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:37.059: INFO: Number of nodes with available pods: 0 May 6 18:11:37.059: INFO: Node hunter-worker is running more than one daemon pod May 6 18:11:38.036: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:38.039: INFO: Number of nodes with available pods: 2 May 6 18:11:38.039: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 6 18:11:38.453: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:38.471: INFO: Number of nodes with available pods: 1 May 6 18:11:38.471: INFO: Node hunter-worker2 is running more than one daemon pod May 6 18:11:39.475: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:39.477: INFO: Number of nodes with available pods: 1 May 6 18:11:39.477: INFO: Node hunter-worker2 is running more than one daemon pod May 6 18:11:40.476: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:40.479: INFO: Number of nodes with available pods: 1 May 6 18:11:40.479: INFO: Node hunter-worker2 is running more than one daemon pod May 6 18:11:41.536: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:41.539: INFO: Number of nodes with available pods: 1 May 6 18:11:41.539: INFO: Node hunter-worker2 is running more than one daemon pod May 6 18:11:42.487: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:42.490: INFO: Number of nodes with available pods: 1 May 6 18:11:42.490: INFO: Node hunter-worker2 is running more than one daemon pod May 6 18:11:43.631: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:43.635: INFO: Number of nodes with available pods: 1 May 6 18:11:43.635: INFO: Node 
hunter-worker2 is running more than one daemon pod May 6 18:11:44.475: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:44.478: INFO: Number of nodes with available pods: 1 May 6 18:11:44.478: INFO: Node hunter-worker2 is running more than one daemon pod May 6 18:11:45.656: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 18:11:45.659: INFO: Number of nodes with available pods: 2 May 6 18:11:45.659: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-dxws2, will wait for the garbage collector to delete the pods May 6 18:11:45.749: INFO: Deleting DaemonSet.extensions daemon-set took: 6.257807ms May 6 18:11:46.549: INFO: Terminating DaemonSet.extensions daemon-set pods took: 800.554758ms May 6 18:12:02.133: INFO: Number of nodes with available pods: 0 May 6 18:12:02.133: INFO: Number of running nodes: 0, number of available pods: 0 May 6 18:12:02.136: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-dxws2/daemonsets","resourceVersion":"9092087"},"items":null} May 6 18:12:02.138: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-dxws2/pods","resourceVersion":"9092087"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 
18:12:02.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-dxws2" for this suite. May 6 18:12:12.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:12:12.208: INFO: namespace: e2e-tests-daemonsets-dxws2, resource: bindings, ignored listing per whitelist May 6 18:12:12.252: INFO: namespace e2e-tests-daemonsets-dxws2 deletion completed in 10.100132946s • [SLOW TEST:41.841 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:12:12.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium May 6 18:12:12.377: INFO: Waiting up to 5m0s for pod "pod-1baaa9c4-8fc5-11ea-a618-0242ac110019" in namespace "e2e-tests-emptydir-9znn9" to be "success or failure" May 6 18:12:12.381: INFO: Pod "pod-1baaa9c4-8fc5-11ea-a618-0242ac110019": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.850487ms May 6 18:12:14.505: INFO: Pod "pod-1baaa9c4-8fc5-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127973068s May 6 18:12:16.594: INFO: Pod "pod-1baaa9c4-8fc5-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.21712729s STEP: Saw pod success May 6 18:12:16.594: INFO: Pod "pod-1baaa9c4-8fc5-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:12:16.597: INFO: Trying to get logs from node hunter-worker pod pod-1baaa9c4-8fc5-11ea-a618-0242ac110019 container test-container: STEP: delete the pod May 6 18:12:16.672: INFO: Waiting for pod pod-1baaa9c4-8fc5-11ea-a618-0242ac110019 to disappear May 6 18:12:16.786: INFO: Pod pod-1baaa9c4-8fc5-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:12:16.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-9znn9" for this suite. 
May 6 18:12:22.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:12:22.942: INFO: namespace: e2e-tests-emptydir-9znn9, resource: bindings, ignored listing per whitelist May 6 18:12:22.999: INFO: namespace e2e-tests-emptydir-9znn9 deletion completed in 6.208672275s • [SLOW TEST:10.746 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:12:22.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 6 18:12:23.195: INFO: PodSpec: initContainers in spec.initContainers May 6 18:13:21.140: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-221e3953-8fc5-11ea-a618-0242ac110019", 
GenerateName:"", Namespace:"e2e-tests-init-container-789vd", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-789vd/pods/pod-init-221e3953-8fc5-11ea-a618-0242ac110019", UID:"221ed32d-8fc5-11ea-99e8-0242ac110002", ResourceVersion:"9092320", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724385543, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"195687272"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nd4bs", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001bb7640), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nd4bs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nd4bs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nd4bs", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001aa64d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0012c8ba0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc001aa6560)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001aa6580)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001aa6588), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001aa658c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724385543, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724385543, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724385543, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724385543, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.59", StartTime:(*v1.Time)(0xc0016ceee0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(0xc0016cef80), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000d737a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://36071c0e5ace143d19446310c42b37e0d5a5f276e2472619ff15ab4173508451"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0016cefc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0016cef40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:13:21.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-789vd" for this suite. 
May 6 18:13:45.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:13:45.870: INFO: namespace: e2e-tests-init-container-789vd, resource: bindings, ignored listing per whitelist May 6 18:13:45.910: INFO: namespace e2e-tests-init-container-789vd deletion completed in 24.765941561s • [SLOW TEST:82.911 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:13:45.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 6 18:13:46.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine 
--generator=run/v1 --namespace=e2e-tests-kubectl-4cms7' May 6 18:13:52.679: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 6 18:13:52.679: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 6 18:13:52.782: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 6 18:13:52.804: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 6 18:13:52.975: INFO: scanned /root for discovery docs: May 6 18:13:52.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-4cms7' May 6 18:14:14.168: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 6 18:14:14.168: INFO: stdout: "Created e2e-test-nginx-rc-1812693f7bc4907bef3556d54c17b57a\nScaling up e2e-test-nginx-rc-1812693f7bc4907bef3556d54c17b57a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-1812693f7bc4907bef3556d54c17b57a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-1812693f7bc4907bef3556d54c17b57a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" May 6 18:14:14.168: INFO: stdout: "Created e2e-test-nginx-rc-1812693f7bc4907bef3556d54c17b57a\nScaling up e2e-test-nginx-rc-1812693f7bc4907bef3556d54c17b57a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-1812693f7bc4907bef3556d54c17b57a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-1812693f7bc4907bef3556d54c17b57a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 6 18:14:14.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-4cms7' May 6 18:14:14.939: INFO: stderr: "" May 6 18:14:14.939: INFO: stdout: "e2e-test-nginx-rc-1812693f7bc4907bef3556d54c17b57a-gxzpx " May 6 18:14:14.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-1812693f7bc4907bef3556d54c17b57a-gxzpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4cms7' May 6 18:14:15.484: INFO: stderr: "" May 6 18:14:15.484: INFO: stdout: "true" May 6 18:14:15.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-1812693f7bc4907bef3556d54c17b57a-gxzpx -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-4cms7' May 6 18:14:16.226: INFO: stderr: "" May 6 18:14:16.226: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 6 18:14:16.226: INFO: e2e-test-nginx-rc-1812693f7bc4907bef3556d54c17b57a-gxzpx is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 May 6 18:14:16.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-4cms7' May 6 18:14:16.682: INFO: stderr: "" May 6 18:14:16.682: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:14:16.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4cms7" for this suite. 
May 6 18:14:42.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:14:42.334: INFO: namespace: e2e-tests-kubectl-4cms7, resource: bindings, ignored listing per whitelist
May 6 18:14:42.396: INFO: namespace e2e-tests-kubectl-4cms7 deletion completed in 24.434664889s
• [SLOW TEST:56.486 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:14:42.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-rhmnr
May 6 18:14:49.818: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-rhmnr
STEP: checking the pod's current state and verifying that restartCount is present
May 6 18:14:49.822: INFO: Initial restart count of pod liveness-exec is 0
May 6 18:15:38.520: INFO: Restart count of pod e2e-tests-container-probe-rhmnr/liveness-exec is now 1 (48.698252373s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:15:38.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rhmnr" for this suite.
May 6 18:15:46.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:15:46.762: INFO: namespace: e2e-tests-container-probe-rhmnr, resource: bindings, ignored listing per whitelist
May 6 18:15:46.780: INFO: namespace e2e-tests-container-probe-rhmnr deletion completed in 8.154232685s
• [SLOW TEST:64.383 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:15:46.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-9b88510f-8fc5-11ea-a618-0242ac110019
STEP: Creating a pod to test consume configMaps
May 6 18:15:46.920: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9b8901bd-8fc5-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-dhb7w" to be "success or failure"
May 6 18:15:46.936: INFO: Pod "pod-projected-configmaps-9b8901bd-8fc5-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 15.186737ms
May 6 18:15:48.940: INFO: Pod "pod-projected-configmaps-9b8901bd-8fc5-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019640633s
May 6 18:15:50.945: INFO: Pod "pod-projected-configmaps-9b8901bd-8fc5-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 4.024574353s
May 6 18:15:52.950: INFO: Pod "pod-projected-configmaps-9b8901bd-8fc5-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029244812s
STEP: Saw pod success
May 6 18:15:52.950: INFO: Pod "pod-projected-configmaps-9b8901bd-8fc5-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:15:52.953: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-9b8901bd-8fc5-11ea-a618-0242ac110019 container projected-configmap-volume-test:
STEP: delete the pod
May 6 18:15:53.256: INFO: Waiting for pod pod-projected-configmaps-9b8901bd-8fc5-11ea-a618-0242ac110019 to disappear
May 6 18:15:53.342: INFO: Pod pod-projected-configmaps-9b8901bd-8fc5-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:15:53.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dhb7w" for this suite.
May 6 18:16:01.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:16:01.511: INFO: namespace: e2e-tests-projected-dhb7w, resource: bindings, ignored listing per whitelist
May 6 18:16:01.583: INFO: namespace e2e-tests-projected-dhb7w deletion completed in 8.237882642s
• [SLOW TEST:14.803 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Proxy version v1
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:16:01.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-7drtt in namespace e2e-tests-proxy-96hsp
I0506 18:16:01.728399 6 runners.go:184] Created replication controller with name: proxy-service-7drtt, namespace: e2e-tests-proxy-96hsp, replica count: 1
I0506 18:16:02.778887 6 runners.go:184] proxy-service-7drtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 18:16:03.779069 6 runners.go:184] proxy-service-7drtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 18:16:04.779293 6 runners.go:184] proxy-service-7drtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 18:16:05.779521 6 runners.go:184] proxy-service-7drtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 18:16:06.779748 6 runners.go:184] proxy-service-7drtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 18:16:07.779986 6 runners.go:184] proxy-service-7drtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0506 18:16:08.780208 6 runners.go:184] proxy-service-7drtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0506 18:16:09.780410 6 runners.go:184] proxy-service-7drtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0506 18:16:10.780573 6 runners.go:184] proxy-service-7drtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0506 18:16:11.780772 6 runners.go:184] proxy-service-7drtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0506 18:16:12.780984 6 runners.go:184] proxy-service-7drtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0506 18:16:13.781411 6 runners.go:184] proxy-service-7drtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0506 18:16:14.782055 6 runners.go:184] proxy-service-7drtt Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 6 18:16:14.787: INFO: setup took 13.11107118s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
May 6 18:16:14.793: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-96hsp/pods/http:proxy-service-7drtt-b8hq9:160/proxy/: foo (200; 5.440828ms)
May 6 18:16:14.795: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-96hsp/pods/proxy-service-7drtt-b8hq9/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 6 18:16:27.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3bf13f1-8fc5-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-jx8rg" to be "success or failure"
May 6 18:16:27.529: INFO: Pod "downwardapi-volume-b3bf13f1-8fc5-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 3.461768ms
May 6 18:16:29.773: INFO: Pod "downwardapi-volume-b3bf13f1-8fc5-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24826806s
May 6 18:16:31.777: INFO: Pod "downwardapi-volume-b3bf13f1-8fc5-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.252055511s
May 6 18:16:33.782: INFO: Pod "downwardapi-volume-b3bf13f1-8fc5-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.25664013s
STEP: Saw pod success
May 6 18:16:33.782: INFO: Pod "downwardapi-volume-b3bf13f1-8fc5-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:16:33.785: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b3bf13f1-8fc5-11ea-a618-0242ac110019 container client-container:
STEP: delete the pod
May 6 18:16:33.880: INFO: Waiting for pod downwardapi-volume-b3bf13f1-8fc5-11ea-a618-0242ac110019 to disappear
May 6 18:16:33.888: INFO: Pod downwardapi-volume-b3bf13f1-8fc5-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:16:33.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jx8rg" for this suite.
May 6 18:16:39.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:16:39.945: INFO: namespace: e2e-tests-projected-jx8rg, resource: bindings, ignored listing per whitelist
May 6 18:16:39.993: INFO: namespace e2e-tests-projected-jx8rg deletion completed in 6.102336914s
• [SLOW TEST:12.617 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:16:39.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 6 18:16:40.093: INFO: Waiting up to 5m0s for pod "pod-bb3ce6e7-8fc5-11ea-a618-0242ac110019" in namespace "e2e-tests-emptydir-jk8lv" to be "success or failure"
May 6 18:16:40.146: INFO: Pod "pod-bb3ce6e7-8fc5-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 53.022655ms
May 6 18:16:42.176: INFO: Pod "pod-bb3ce6e7-8fc5-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0824458s
May 6 18:16:44.180: INFO: Pod "pod-bb3ce6e7-8fc5-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086727535s
May 6 18:16:46.201: INFO: Pod "pod-bb3ce6e7-8fc5-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108066814s
STEP: Saw pod success
May 6 18:16:46.201: INFO: Pod "pod-bb3ce6e7-8fc5-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:16:46.204: INFO: Trying to get logs from node hunter-worker pod pod-bb3ce6e7-8fc5-11ea-a618-0242ac110019 container test-container:
STEP: delete the pod
May 6 18:16:46.367: INFO: Waiting for pod pod-bb3ce6e7-8fc5-11ea-a618-0242ac110019 to disappear
May 6 18:16:46.373: INFO: Pod pod-bb3ce6e7-8fc5-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:16:46.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jk8lv" for this suite.
May 6 18:16:52.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:16:52.416: INFO: namespace: e2e-tests-emptydir-jk8lv, resource: bindings, ignored listing per whitelist
May 6 18:16:52.467: INFO: namespace e2e-tests-emptydir-jk8lv deletion completed in 6.090366593s
• [SLOW TEST:12.473 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:16:52.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-c2a94d59-8fc5-11ea-a618-0242ac110019
STEP: Creating a pod to test consume secrets
May 6 18:16:52.579: INFO: Waiting up to 5m0s for pod "pod-secrets-c2ac6651-8fc5-11ea-a618-0242ac110019" in namespace "e2e-tests-secrets-wh7xl" to be "success or failure"
May 6 18:16:52.601: INFO: Pod "pod-secrets-c2ac6651-8fc5-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 21.567052ms
May 6 18:16:54.605: INFO: Pod "pod-secrets-c2ac6651-8fc5-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02569765s
May 6 18:16:56.609: INFO: Pod "pod-secrets-c2ac6651-8fc5-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 4.030055185s
May 6 18:16:58.692: INFO: Pod "pod-secrets-c2ac6651-8fc5-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.112353165s
STEP: Saw pod success
May 6 18:16:58.692: INFO: Pod "pod-secrets-c2ac6651-8fc5-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:16:58.695: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-c2ac6651-8fc5-11ea-a618-0242ac110019 container secret-volume-test:
STEP: delete the pod
May 6 18:16:58.753: INFO: Waiting for pod pod-secrets-c2ac6651-8fc5-11ea-a618-0242ac110019 to disappear
May 6 18:16:58.888: INFO: Pod pod-secrets-c2ac6651-8fc5-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:16:58.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-wh7xl" for this suite.
May 6 18:17:04.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:17:04.990: INFO: namespace: e2e-tests-secrets-wh7xl, resource: bindings, ignored listing per whitelist
May 6 18:17:04.994: INFO: namespace e2e-tests-secrets-wh7xl deletion completed in 6.101360146s
• [SLOW TEST:12.527 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:17:04.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
May 6 18:17:05.158: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:17:05.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-scjxk" for this suite.
May 6 18:17:11.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:17:11.322: INFO: namespace: e2e-tests-kubectl-scjxk, resource: bindings, ignored listing per whitelist
May 6 18:17:11.337: INFO: namespace e2e-tests-kubectl-scjxk deletion completed in 6.087234248s
• [SLOW TEST:6.343 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:17:11.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-vk44f
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-vk44f
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-vk44f
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-vk44f
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-vk44f
May 6 18:17:15.547: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-vk44f, name: ss-0, uid: cfc3ab30-8fc5-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete.
May 6 18:17:21.246: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-vk44f, name: ss-0, uid: cfc3ab30-8fc5-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
May 6 18:17:21.411: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-vk44f, name: ss-0, uid: cfc3ab30-8fc5-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
May 6 18:17:21.669: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-vk44f
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-vk44f
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-vk44f and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 6 18:17:26.077: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vk44f
May 6 18:17:26.080: INFO: Scaling statefulset ss to 0
May 6 18:17:36.096: INFO: Waiting for statefulset status.replicas updated to 0
May 6 18:17:36.099: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:17:36.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-vk44f" for this suite.
May 6 18:17:42.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:17:42.649: INFO: namespace: e2e-tests-statefulset-vk44f, resource: bindings, ignored listing per whitelist
May 6 18:17:42.670: INFO: namespace e2e-tests-statefulset-vk44f deletion completed in 6.475670066s
• [SLOW TEST:31.333 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:17:42.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 6 18:17:42.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-s9jcr'
May 6 18:17:43.089: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 6 18:17:43.089: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
May 6 18:17:45.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-s9jcr'
May 6 18:17:45.272: INFO: stderr: ""
May 6 18:17:45.272: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:17:45.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-s9jcr" for this suite.
May 6 18:17:51.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:17:51.817: INFO: namespace: e2e-tests-kubectl-s9jcr, resource: bindings, ignored listing per whitelist
May 6 18:17:51.854: INFO: namespace e2e-tests-kubectl-s9jcr deletion completed in 6.561440648s
• [SLOW TEST:9.183 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:17:51.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
May 6 18:17:52.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pb7rm'
May 6 18:17:52.265: INFO: stderr: ""
May 6 18:17:52.265: INFO: stdout: "pod/pause created\n"
May 6 18:17:52.265: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May 6 18:17:52.265: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-pb7rm" to be "running and ready"
May 6 18:17:52.278: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.937334ms
May 6 18:17:54.281: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016013589s
May 6 18:17:56.285: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.019667355s
May 6 18:17:56.285: INFO: Pod "pause" satisfied condition "running and ready"
May 6 18:17:56.285: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
May 6 18:17:56.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-pb7rm'
May 6 18:17:56.393: INFO: stderr: ""
May 6 18:17:56.393: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 6 18:17:56.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-pb7rm'
May 6 18:17:56.487: INFO: stderr: ""
May 6 18:17:56.487: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
May 6 18:17:56.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-pb7rm'
May 6 18:17:56.645: INFO: stderr: ""
May 6 18:17:56.645: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 6 18:17:56.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-pb7rm'
May 6 18:17:56.760: INFO: stderr: ""
May 6 18:17:56.760: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
May 6 18:17:56.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pb7rm'
May 6 18:17:56.929: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 6 18:17:56.929: INFO: stdout: "pod \"pause\" force deleted\n"
May 6 18:17:56.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-pb7rm'
May 6 18:17:57.060: INFO: stderr: "No resources found.\n"
May 6 18:17:57.060: INFO: stdout: ""
May 6 18:17:57.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-pb7rm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 6 18:17:57.321: INFO: stderr: ""
May 6 18:17:57.321: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:17:57.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pb7rm" for this suite.
May 6 18:18:03.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:18:03.922: INFO: namespace: e2e-tests-kubectl-pb7rm, resource: bindings, ignored listing per whitelist
May 6 18:18:03.967: INFO: namespace e2e-tests-kubectl-pb7rm deletion completed in 6.555986254s
• [SLOW TEST:12.113 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:18:03.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
May 6 18:18:04.148: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 6 18:18:04.189: INFO: Waiting for terminating namespaces to be deleted...
May 6 18:18:04.191: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
May 6 18:18:04.195: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
May 6 18:18:04.196: INFO: Container kube-proxy ready: true, restart count 0
May 6 18:18:04.196: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 6 18:18:04.196: INFO: Container kindnet-cni ready: true, restart count 0
May 6 18:18:04.196: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 6 18:18:04.196: INFO: Container coredns ready: true, restart count 0
May 6 18:18:04.196: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
May 6 18:18:04.199: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 6 18:18:04.199: INFO: Container kindnet-cni ready: true, restart count 0
May 6 18:18:04.199: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 6 18:18:04.199: INFO: Container coredns ready: true, restart count 0
May 6 18:18:04.199: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 6 18:18:04.199: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-efe2047c-8fc5-11ea-a618-0242ac110019 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-efe2047c-8fc5-11ea-a618-0242ac110019 off the node hunter-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-efe2047c-8fc5-11ea-a618-0242ac110019
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:18:12.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-9jgns" for this suite.
May 6 18:18:40.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:18:40.886: INFO: namespace: e2e-tests-sched-pred-9jgns, resource: bindings, ignored listing per whitelist
May 6 18:18:40.889: INFO: namespace e2e-tests-sched-pred-9jgns deletion completed in 28.172281401s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:36.921 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:18:40.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 6 18:18:41.042: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 6 18:18:41.058: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:41.060: INFO: Number of nodes with available pods: 0
May 6 18:18:41.060: INFO: Node hunter-worker is running more than one daemon pod
May 6 18:18:42.065: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:42.068: INFO: Number of nodes with available pods: 0
May 6 18:18:42.068: INFO: Node hunter-worker is running more than one daemon pod
May 6 18:18:43.405: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:43.429: INFO: Number of nodes with available pods: 0
May 6 18:18:43.429: INFO: Node hunter-worker is running more than one daemon pod
May 6 18:18:44.332: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:44.335: INFO: Number of nodes with available pods: 0
May 6 18:18:44.336: INFO: Node hunter-worker is running more than one daemon pod
May 6 18:18:45.064: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:45.068: INFO: Number of nodes with available pods: 0
May 6 18:18:45.068: INFO: Node hunter-worker is running more than one daemon pod
May 6 18:18:46.111: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:46.115: INFO: Number of nodes with available pods: 0
May 6 18:18:46.115: INFO: Node hunter-worker is running more than one daemon pod
May 6 18:18:47.065: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:47.069: INFO: Number of nodes with available pods: 2
May 6 18:18:47.069: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 6 18:18:47.100: INFO: Wrong image for pod: daemon-set-8wgvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:47.100: INFO: Wrong image for pod: daemon-set-s6dm6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:47.106: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:48.111: INFO: Wrong image for pod: daemon-set-8wgvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:48.111: INFO: Wrong image for pod: daemon-set-s6dm6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:48.114: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:49.237: INFO: Wrong image for pod: daemon-set-8wgvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:49.237: INFO: Wrong image for pod: daemon-set-s6dm6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:49.241: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:50.110: INFO: Wrong image for pod: daemon-set-8wgvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:50.110: INFO: Wrong image for pod: daemon-set-s6dm6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:50.115: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:51.111: INFO: Wrong image for pod: daemon-set-8wgvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:51.111: INFO: Wrong image for pod: daemon-set-s6dm6. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:51.111: INFO: Pod daemon-set-s6dm6 is not available
May 6 18:18:51.115: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:52.111: INFO: Pod daemon-set-88sr6 is not available
May 6 18:18:52.111: INFO: Wrong image for pod: daemon-set-8wgvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:52.115: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:53.111: INFO: Pod daemon-set-88sr6 is not available
May 6 18:18:53.111: INFO: Wrong image for pod: daemon-set-8wgvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:53.114: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:54.123: INFO: Pod daemon-set-88sr6 is not available
May 6 18:18:54.123: INFO: Wrong image for pod: daemon-set-8wgvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:54.127: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:55.111: INFO: Pod daemon-set-88sr6 is not available
May 6 18:18:55.111: INFO: Wrong image for pod: daemon-set-8wgvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:55.114: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:56.110: INFO: Wrong image for pod: daemon-set-8wgvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:56.114: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:57.111: INFO: Wrong image for pod: daemon-set-8wgvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:57.115: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:58.111: INFO: Wrong image for pod: daemon-set-8wgvv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 6 18:18:58.111: INFO: Pod daemon-set-8wgvv is not available
May 6 18:18:58.114: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:59.110: INFO: Pod daemon-set-xj9rl is not available
May 6 18:18:59.114: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 6 18:18:59.116: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:18:59.119: INFO: Number of nodes with available pods: 1
May 6 18:18:59.119: INFO: Node hunter-worker2 is running more than one daemon pod
May 6 18:19:00.136: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:19:00.140: INFO: Number of nodes with available pods: 1
May 6 18:19:00.140: INFO: Node hunter-worker2 is running more than one daemon pod
May 6 18:19:01.124: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:19:01.127: INFO: Number of nodes with available pods: 1
May 6 18:19:01.127: INFO: Node hunter-worker2 is running more than one daemon pod
May 6 18:19:02.124: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 6 18:19:02.127: INFO: Number of nodes with available pods: 2
May 6 18:19:02.127: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6crfr, will wait for the garbage collector to delete the pods
May 6 18:19:02.311: INFO: Deleting DaemonSet.extensions daemon-set took: 118.945867ms
May 6 18:19:02.411: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.248109ms
May 6 18:19:11.913: INFO: Number of nodes with available pods: 0
May 6 18:19:11.913: INFO: Number of running nodes: 0, number of available pods: 0
May 6 18:19:11.916: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6crfr/daemonsets","resourceVersion":"9093566"},"items":null}
May 6 18:19:11.944: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6crfr/pods","resourceVersion":"9093567"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:19:11.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6crfr" for this suite.
May 6 18:19:18.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:19:18.039: INFO: namespace: e2e-tests-daemonsets-6crfr, resource: bindings, ignored listing per whitelist
May 6 18:19:18.097: INFO: namespace e2e-tests-daemonsets-6crfr deletion completed in 6.140828518s
• [SLOW TEST:37.208 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:19:18.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
May 6 18:19:18.759: INFO: created pod pod-service-account-defaultsa
May 6 18:19:18.759: INFO: pod pod-service-account-defaultsa service account token volume mount: true
May 6 18:19:18.764: INFO: created pod pod-service-account-mountsa
May 6 18:19:18.764: INFO: pod pod-service-account-mountsa service account token volume mount: true
May 6 18:19:18.806: INFO: created pod pod-service-account-nomountsa
May 6 18:19:18.806: INFO: pod pod-service-account-nomountsa service account token volume mount: false
May 6 18:19:18.824: INFO: created pod pod-service-account-defaultsa-mountspec
May 6 18:19:18.824: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
May 6 18:19:18.873: INFO: created pod pod-service-account-mountsa-mountspec
May 6 18:19:18.873: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
May 6 18:19:18.962: INFO: created pod pod-service-account-nomountsa-mountspec
May 6 18:19:18.962: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
May 6 18:19:18.969: INFO: created pod pod-service-account-defaultsa-nomountspec
May 6 18:19:18.969: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
May 6 18:19:19.004: INFO: created pod pod-service-account-mountsa-nomountspec
May 6 18:19:19.004: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
May 6 18:19:19.099: INFO: created pod pod-service-account-nomountsa-nomountspec
May 6 18:19:19.099: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:19:19.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-dfjt7" for this suite.
May 6 18:19:51.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:19:51.296: INFO: namespace: e2e-tests-svcaccounts-dfjt7, resource: bindings, ignored listing per whitelist
May 6 18:19:51.302: INFO: namespace e2e-tests-svcaccounts-dfjt7 deletion completed in 32.199213289s
• [SLOW TEST:33.205 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:19:51.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:19:58.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-v227w" for this suite.
May 6 18:20:22.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:20:22.533: INFO: namespace: e2e-tests-replication-controller-v227w, resource: bindings, ignored listing per whitelist
May 6 18:20:22.554: INFO: namespace e2e-tests-replication-controller-v227w deletion completed in 24.117404602s
• [SLOW TEST:31.251 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:20:22.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-z7pgf
I0506 18:20:22.675498 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-z7pgf, replica count: 1
I0506 18:20:23.725896 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 18:20:24.726078 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 18:20:25.726295 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 18:20:26.726541 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 18:20:27.726793 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 6 18:20:27.860: INFO: Created: latency-svc-d9qkz
May 6 18:20:27.877: INFO: Got endpoints: latency-svc-d9qkz [50.776511ms]
May 6 18:20:27.908: INFO: Created: latency-svc-87twj
May 6 18:20:27.943: INFO: Got endpoints: latency-svc-87twj [65.842447ms]
May 6 18:20:27.960: INFO: Created: latency-svc-d86gb
May 6 18:20:27.976: INFO: Got endpoints: latency-svc-d86gb [98.699212ms]
May 6 18:20:28.007: INFO: Created: latency-svc-wfdzm
May 6 18:20:28.024: INFO: Got endpoints: latency-svc-wfdzm [146.11282ms]
May 6 18:20:28.136: INFO: Created: latency-svc-7qsgx
May 6 18:20:28.139: INFO: Got endpoints: latency-svc-7qsgx [261.501985ms]
May 6 18:20:28.567: INFO: Created: latency-svc-lmd44
May 6 18:20:28.571: INFO: Got endpoints: latency-svc-lmd44 [693.413768ms]
May 6 18:20:28.656: INFO: Created: latency-svc-zg5bt
May 6 18:20:28.825: INFO: Got endpoints: latency-svc-zg5bt [947.435745ms]
May 6 18:20:28.828: INFO: Created: latency-svc-lwrjp
May 6 18:20:28.839: INFO: Got endpoints: latency-svc-lwrjp [961.370154ms]
May 6 18:20:29.016: INFO: Created: latency-svc-bqbxw
May 6 18:20:29.020: INFO: Got endpoints: latency-svc-bqbxw [1.142859829s]
May 6 18:20:29.161: INFO: Created: latency-svc-6945q
May 6 18:20:29.177: INFO: Got endpoints: latency-svc-6945q [1.299732734s]
May 6 18:20:29.217: INFO: Created: latency-svc-82bsc
May 6 18:20:29.249: INFO: Got endpoints: latency-svc-82bsc [1.371561154s]
May 6 18:20:29.334: INFO: Created: latency-svc-8wvp4
May 6 18:20:29.387: INFO: Got endpoints: latency-svc-8wvp4 [1.509199526s]
May 6 18:20:29.477: INFO: Created: latency-svc-f6s2l
May 6 18:20:29.481: INFO: Got endpoints: latency-svc-f6s2l [1.603609853s]
May 6 18:20:29.514: INFO: Created: latency-svc-f746b
May 6 18:20:29.543: INFO: Got endpoints: latency-svc-f746b [1.664964595s]
May 6 18:20:29.567: INFO: Created: latency-svc-68g29
May 6 18:20:29.620: INFO: Got endpoints: latency-svc-68g29 [1.74206008s]
May 6 18:20:29.647: INFO: Created: latency-svc-vjx5x
May 6 18:20:29.662: INFO: Got endpoints: latency-svc-vjx5x [1.784112926s]
May 6 18:20:29.686: INFO: Created: latency-svc-86sfn
May 6 18:20:29.700: INFO: Got endpoints: latency-svc-86sfn [1.756413765s]
May 6 18:20:29.758: INFO: Created: latency-svc-vqxhg
May 6 18:20:29.771: INFO: Got endpoints: latency-svc-vqxhg [1.794857433s]
May 6 18:20:29.800: INFO: Created: latency-svc-fxm6z
May 6 18:20:29.814: INFO: Got endpoints: latency-svc-fxm6z [1.790326741s]
May 6 18:20:29.839: INFO: Created: latency-svc-xvdks
May 6 18:20:29.856: INFO: Got endpoints: latency-svc-xvdks [1.717033488s]
May 6 18:20:29.902: INFO: Created: latency-svc-cn5b4
May 6 18:20:29.910: INFO: Got endpoints: latency-svc-cn5b4 [1.338991885s]
May 6 18:20:29.931: INFO: Created: latency-svc-wkjqs
May 6 18:20:29.947: INFO: Got endpoints: latency-svc-wkjqs [1.121735837s]
May 6 18:20:29.987: INFO: Created: latency-svc-ppcz5
May 6 18:20:29.999: INFO: Got endpoints: latency-svc-ppcz5 [1.160393624s]
May 6 18:20:30.040: INFO: Created: latency-svc-26thv
May 6 18:20:30.042: INFO: Got endpoints: latency-svc-26thv [1.021960812s]
May 6 18:20:30.067: INFO: Created: latency-svc-c2hq6
May 6 18:20:30.084: INFO: Got endpoints: latency-svc-c2hq6 [906.533888ms]
May 6 18:20:30.104: INFO: Created: latency-svc-wnldp
May 6 18:20:30.132: INFO: Got endpoints: latency-svc-wnldp [882.907017ms]
May 6 18:20:30.177: INFO: Created: latency-svc-9rlnb
May 6 18:20:30.180: INFO: Got endpoints: latency-svc-9rlnb [793.045574ms]
May 6 18:20:30.224: INFO: Created: latency-svc-djqd4
May 6 18:20:30.241: INFO: Got endpoints: latency-svc-djqd4 [759.0997ms]
May 6 18:20:30.339: INFO: Created: latency-svc-66ks6
May 6 18:20:30.343: INFO: Got endpoints: latency-svc-66ks6 [799.773967ms]
May 6 18:20:30.422: INFO: Created: latency-svc-q5hr6
May 6 18:20:30.488: INFO: Got endpoints: latency-svc-q5hr6 [868.462297ms]
May 6 18:20:30.491: INFO: Created: latency-svc-zf72c
May 6 18:20:30.499: INFO: Got endpoints: latency-svc-zf72c [837.643272ms]
May 6 18:20:30.542: INFO: Created: latency-svc-j7sf9
May 6 18:20:30.577: INFO: Got endpoints: latency-svc-j7sf9 [877.672485ms]
May 6 18:20:30.668: INFO: Created: latency-svc-6xr58
May 6 18:20:30.673: INFO: Got endpoints: latency-svc-6xr58 [901.653072ms]
May 6 18:20:31.468: INFO: Created: latency-svc-nc6w4
May 6 18:20:31.476: INFO: Got endpoints: latency-svc-nc6w4 [1.661841776s]
May 6 18:20:31.511: INFO: Created: latency-svc-x7j95
May 6 18:20:31.676: INFO: Got endpoints: latency-svc-x7j95 [1.819887257s]
May 6 18:20:31.720: INFO: Created: latency-svc-vmqhk
May 6 18:20:31.754: INFO: Got endpoints: latency-svc-vmqhk [1.844322114s]
May 6 18:20:32.352: INFO: Created: latency-svc-7kl7f
May 6 18:20:32.356: INFO: Got endpoints: latency-svc-7kl7f [2.409423574s]
May 6 18:20:32.645: INFO: Created: latency-svc-tnwnm
May 6 18:20:32.950: INFO: Got endpoints: latency-svc-tnwnm [2.950518562s]
May 6 18:20:33.000: INFO: Created: latency-svc-wxljs
May 6 18:20:33.159: INFO: Got endpoints: latency-svc-wxljs [3.117030686s]
May 6 18:20:33.469: INFO: Created: latency-svc-28g67
May 6 18:20:33.807: INFO: Got endpoints: latency-svc-28g67 [3.722837275s]
May 6 18:20:33.810: INFO: Created: latency-svc-fz9h9
May 6 18:20:33.822: INFO: Got endpoints: latency-svc-fz9h9 [3.689732824s]
May 6 18:20:33.875: INFO: Created: latency-svc-r9jtc
May 6 18:20:33.905: INFO: Got endpoints: latency-svc-r9jtc [3.72507256s]
May 6 18:20:33.986: INFO: Created: latency-svc-2j66n
May 6 18:20:34.002: INFO: Got endpoints: latency-svc-2j66n [3.761470989s]
May 6 18:20:34.079: INFO: Created: latency-svc-8xs2b
May 6 18:20:34.171: INFO: Got endpoints: latency-svc-8xs2b [3.828429134s]
May 6 18:20:34.177: INFO: Created: latency-svc-ktfqm
May 6 18:20:34.195: INFO: Got endpoints: latency-svc-ktfqm [3.706438638s]
May 6 18:20:34.235: INFO: Created: latency-svc-hz9mx
May 6 18:20:34.267: INFO: Got endpoints: latency-svc-hz9mx [3.767394883s]
May 6 18:20:34.381: INFO: Created: latency-svc-g7ngh
May 6 18:20:34.408: INFO: Got endpoints: latency-svc-g7ngh [3.830072167s]
May 6 18:20:34.435: INFO: Created: latency-svc-tfwqn
May 6 18:20:34.441: INFO: Got endpoints: latency-svc-tfwqn [3.767754089s]
May 6 18:20:34.468: INFO: Created: latency-svc-whng9
May 6 18:20:34.554: INFO: Got endpoints: latency-svc-whng9 [3.078244956s]
May 6 18:20:34.580: INFO: Created: latency-svc-4blz2
May 6 18:20:34.628: INFO: Got endpoints: latency-svc-4blz2 [2.951536117s]
May 6 18:20:34.768: INFO: Created: latency-svc-vrtcm
May 6 18:20:34.784: INFO: Got endpoints: latency-svc-vrtcm [3.029010443s]
May 6 18:20:34.817: INFO: Created: latency-svc-c4gb2
May 6 18:20:34.838: INFO: Got endpoints: latency-svc-c4gb2 [2.481885219s]
May 6 18:20:34.914: INFO: Created: latency-svc-8bxq7
May 6 18:20:34.916: INFO: Got endpoints: latency-svc-8bxq7 [1.965995909s]
May 6 18:20:34.970: INFO: Created: latency-svc-bp4tk
May 6 18:20:34.982: INFO: Got endpoints: latency-svc-bp4tk [1.822868708s]
May 6 18:20:35.006: INFO: Created: latency-svc-rlxdt
May 6 18:20:35.111: INFO: Got endpoints: latency-svc-rlxdt [1.304209853s]
May 6 18:20:35.117: INFO: Created: latency-svc-6dxsh
May 6 18:20:35.139: INFO: Got endpoints: latency-svc-6dxsh [1.316595125s]
May 6 18:20:35.195: INFO: Created: latency-svc-szfxk
May 6 18:20:35.260: INFO: Got endpoints: latency-svc-szfxk [1.355198846s]
May 6 18:20:35.285: INFO: Created: latency-svc-qldrx
May 6 18:20:35.301: INFO: Got endpoints: latency-svc-qldrx [1.299288599s]
May 6 18:20:35.347: INFO: Created: latency-svc-nwd85
May 6 18:20:35.406: INFO: Got endpoints: latency-svc-nwd85 [1.234767628s]
May 6 18:20:35.412: INFO: Created: latency-svc-cm2db
May 6 18:20:35.440: INFO: Got endpoints: latency-svc-cm2db [1.244818944s]
May 6 18:20:35.474: INFO: Created: latency-svc-8lrx4
May 6 18:20:35.562: INFO: Got endpoints: latency-svc-8lrx4 [1.294707998s]
May 6 18:20:35.824: INFO: Created: latency-svc-s7bk4
May 6 18:20:35.828: INFO: Got endpoints: latency-svc-s7bk4 [1.420263762s]
May 6 18:20:36.004: INFO: Created: latency-svc-77v7f
May 6 18:20:36.028: INFO: Got endpoints: latency-svc-77v7f [1.586929899s]
May 6 18:20:36.069: INFO: Created: latency-svc-mzzdc
May 6 18:20:36.148: INFO: Got endpoints: latency-svc-mzzdc [1.593236272s]
May 6 18:20:36.226: INFO: Created: latency-svc-dcjll
May 6 18:20:36.282: INFO: Got endpoints: latency-svc-dcjll [1.65409131s]
May 6 18:20:36.306: INFO: Created: latency-svc-jrv5w
May 6 18:20:36.316: INFO: Got endpoints: latency-svc-jrv5w [1.532551688s]
May 6 18:20:36.345: INFO: Created: latency-svc-qb9xd
May 6 18:20:36.359: INFO: Got endpoints: latency-svc-qb9xd [1.520316288s]
May 6 18:20:36.459: INFO: Created: latency-svc-vttpg
May 6 18:20:36.461: INFO: Got endpoints: latency-svc-vttpg [1.545050137s]
May 6 18:20:36.523: INFO: Created: latency-svc-q9n7c
May 6 18:20:36.539: INFO: Got endpoints: latency-svc-q9n7c [1.556334334s]
May 6 18:20:36.620: INFO: Created: latency-svc-gj598
May 6 18:20:36.625: INFO: Got endpoints: latency-svc-gj598 [1.51329756s]
May 6 18:20:36.690: INFO: Created: latency-svc-hsw6t
May 6 18:20:36.714: INFO: Got endpoints: latency-svc-hsw6t [1.57536375s]
May 6 18:20:36.800: INFO: Created: latency-svc-vjrkc
May 6 18:20:36.804: INFO: Got endpoints: latency-svc-vjrkc [1.543355839s]
May 6 18:20:37.035: INFO: Created: latency-svc-sz45r
May 6 18:20:37.064: INFO: Got endpoints: latency-svc-sz45r [1.76279821s]
May 6 18:20:37.195: INFO: Created: latency-svc-8z85q
May 6 18:20:37.198: INFO: Got endpoints: latency-svc-8z85q [1.791698224s]
May 6 18:20:37.280: INFO: Created: latency-svc-jhc7p
May 6 18:20:37.386: INFO: Got endpoints: latency-svc-jhc7p [1.9465686s]
May 6 18:20:37.445: INFO: Created: latency-svc-gl9z8
May 6 18:20:37.476: INFO: Got endpoints: latency-svc-gl9z8 [1.914658034s]
May 6 18:20:38.079: INFO: Created: latency-svc-rkk5q
May 6 18:20:38.514: INFO: Created: latency-svc-td2xb
May 6 18:20:38.815: INFO: Got endpoints: latency-svc-rkk5q [2.986869845s]
May 6 18:20:39.087: INFO: Got endpoints: latency-svc-td2xb [3.059502568s]
May 6 18:20:39.092: INFO: Created: latency-svc-nrsth
May 6 18:20:39.406: INFO: Got endpoints: latency-svc-nrsth [3.258346535s]
May 6 18:20:39.806: INFO: Created: latency-svc-kdxqt
May 6 18:20:39.810: INFO: Got endpoints: latency-svc-kdxqt [3.528323673s]
May 6 18:20:40.424: INFO: Created: latency-svc-m9cv7
May 6 18:20:40.427: INFO: Got endpoints: latency-svc-m9cv7 [4.110252114s]
May 6 18:20:40.734: INFO: Created: latency-svc-2n9zf
May 6 18:20:40.742: INFO: Got endpoints: latency-svc-2n9zf [4.383668831s]
May 6 18:20:40.890: INFO: Created: latency-svc-9lfs2
May 6 18:20:40.912: INFO: Got endpoints: latency-svc-9lfs2 [4.450862824s]
May 6 18:20:40.912: INFO: Created: latency-svc-hsgds
May 6 18:20:40.955: INFO: Got endpoints: latency-svc-hsgds [4.416527695s]
May 6 18:20:41.058: INFO: Created: latency-svc-q6njm
May 6 18:20:41.061: INFO: Got endpoints: latency-svc-q6njm [4.436409737s]
May 6 18:20:42.114: INFO: Created: latency-svc-4jzr8
May 6 18:20:42.273: INFO: Got endpoints: latency-svc-4jzr8 [5.5592995s]
May 6 18:20:42.531: INFO: Created: latency-svc-76l75
May 6 18:20:42.535: INFO: Got endpoints: latency-svc-76l75 [5.730920379s]
May 6 18:20:42.791: INFO: Created: latency-svc-r75t8
May 6 18:20:42.823: INFO: Got endpoints: latency-svc-r75t8 [5.758628806s]
May 6 18:20:43.159: INFO: Created: latency-svc-82z2q
May 6 18:20:43.176: INFO: Got endpoints: latency-svc-82z2q [5.978706184s]
May 6 18:20:43.424: INFO: Created: latency-svc-lhwcp
May 6 18:20:43.467: INFO: Got endpoints: latency-svc-lhwcp [6.080213613s]
May 6 18:20:43.628: INFO: Created: latency-svc-nkmzw
May 6 18:20:43.848: INFO: Got endpoints: latency-svc-nkmzw [6.371358501s]
May 6 18:20:43.932: INFO: Created: latency-svc-w8hr9
May 6 18:20:43.947: INFO: Got endpoints: latency-svc-w8hr9 [5.13157697s]
May 6 18:20:44.042: INFO: Created: latency-svc-qpvj9
May 6 18:20:44.072: INFO: Got endpoints: latency-svc-qpvj9 [4.984152453s]
May 6 18:20:44.141: INFO: Created: latency-svc-rrmhh
May 6 18:20:44.145: INFO: Got endpoints: latency-svc-rrmhh [4.739028828s]
May 6 18:20:44.238: INFO: Created: latency-svc-qrhc9
May 6 18:20:44.669: INFO: Got endpoints: latency-svc-qrhc9 [4.858288289s]
May 6 18:20:44.762: INFO: Created: latency-svc-cnx6k
May 6 18:20:44.854: INFO: Got endpoints: latency-svc-cnx6k [4.427170727s]
May 6 18:20:44.897: INFO: Created: latency-svc-4w942
May 6 18:20:44.905: INFO: Got endpoints: latency-svc-4w942 [4.162714112s]
May 6 18:20:44.947: INFO: Created: latency-svc-j47g5
May 6 18:20:44.992: INFO: Got endpoints: latency-svc-j47g5 [4.080232387s]
May 6 18:20:45.006: INFO: Created: latency-svc-jv9bc
May 6 18:20:45.014: INFO: Got endpoints: latency-svc-jv9bc [4.058869848s]
May 6 18:20:45.051: INFO: Created: latency-svc-6zx69
May 6 18:20:45.074: INFO: Got endpoints: latency-svc-6zx69 [4.013117829s]
May 6 18:20:45.228: INFO: Created: latency-svc-fk2j8
May 6 18:20:45.459: INFO: Got endpoints: latency-svc-fk2j8 [3.185150015s]
May 6 18:20:45.492: INFO: Created: latency-svc-ng56p
May 6 18:20:45.531: INFO: Got endpoints: latency-svc-ng56p [2.996115157s]
May 6 18:20:45.708: INFO: Created: latency-svc-4d6x2
May 6 18:20:45.734: INFO: Got endpoints: latency-svc-4d6x2 [2.911057068s]
May 6 18:20:45.844: INFO: Created: latency-svc-mtqqv
May 6 18:20:45.847: INFO: Got endpoints: latency-svc-mtqqv [2.670279012s]
May 6 18:20:45.925: INFO: Created: latency-svc-bpdtm
May 6 18:20:46.038: INFO: Got endpoints: latency-svc-bpdtm [2.571447584s]
May 6 18:20:46.096: INFO: Created: latency-svc-lcb44
May 6 18:20:46.189: INFO: Got endpoints: latency-svc-lcb44 [2.341589388s]
May 6 18:20:46.207: INFO: Created: latency-svc-4bb6k
May 6 18:20:46.239: INFO: Got endpoints: latency-svc-4bb6k [2.292856165s]
May 6 18:20:46.748: INFO: Created: latency-svc-dldbk
May 6 18:20:46.750: INFO: Got endpoints: latency-svc-dldbk [2.678618398s]
May 6 18:20:47.046: INFO: Created: latency-svc-5977h
May 6 18:20:47.136: INFO: Got endpoints: latency-svc-5977h [2.991256878s]
May 6 18:20:47.430: INFO: Created: latency-svc-2kg4g
May 6 18:20:47.467: INFO: Got endpoints: latency-svc-2kg4g [2.79781252s]
May 6 18:20:47.623: INFO: Created: latency-svc-jlqqp
May 6 18:20:47.624: INFO: Got endpoints: latency-svc-jlqqp [2.770497065s]
May 6 18:20:47.663: INFO: Created: latency-svc-4flqj
May 6 18:20:47.695: INFO: Got endpoints: latency-svc-4flqj [2.790244808s]
May 6 18:20:47.829: INFO: Created: latency-svc-l6jv9
May 6 18:20:47.829: INFO: Got endpoints: latency-svc-l6jv9 [2.836595307s]
May 6 18:20:47.913: INFO: Created: latency-svc-wlbqf
May 6 18:20:47.979: INFO: Got endpoints: latency-svc-wlbqf [2.965042099s]
May 6 18:20:48.040: INFO: Created: latency-svc-r4tjd
May 6 18:20:48.183: INFO: Got endpoints: latency-svc-r4tjd [3.108880957s]
May 6 18:20:48.191: INFO: Created: latency-svc-vj7tn
May 6 18:20:48.206: INFO: Got endpoints: latency-svc-vj7tn [2.747385657s]
May 6 18:20:48.251: INFO: Created: latency-svc-z26xz
May 6 18:20:48.272: INFO: Got endpoints: latency-svc-z26xz [2.740864142s]
May 6 18:20:48.393: INFO: Created: latency-svc-cpl54
May 6 18:20:48.396: INFO: Got endpoints: latency-svc-cpl54 [2.662092501s]
May 6 18:20:48.485: INFO: Created: latency-svc-24rb9
May 6 18:20:48.610: INFO: Got endpoints: latency-svc-24rb9 [2.763555999s]
May 6 18:20:48.662: INFO: Created: latency-svc-skbrp
May 6 18:20:49.225: INFO: Got endpoints: latency-svc-skbrp [3.186734114s]
May 6 18:20:49.549: INFO: Created: latency-svc-6qgvf
May 6 18:20:50.010: INFO: Created: latency-svc-s5c9n
May 6 
18:20:50.285: INFO: Created: latency-svc-4qbwq May 6 18:20:50.285: INFO: Got endpoints: latency-svc-6qgvf [4.095711881s] May 6 18:20:50.334: INFO: Got endpoints: latency-svc-4qbwq [3.58362041s] May 6 18:20:50.520: INFO: Got endpoints: latency-svc-s5c9n [4.280575864s] May 6 18:20:50.521: INFO: Created: latency-svc-fsw5v May 6 18:20:50.670: INFO: Got endpoints: latency-svc-fsw5v [3.533821948s] May 6 18:20:51.112: INFO: Created: latency-svc-zw94w May 6 18:20:51.116: INFO: Got endpoints: latency-svc-zw94w [3.649435312s] May 6 18:20:51.980: INFO: Created: latency-svc-lp62r May 6 18:20:52.060: INFO: Got endpoints: latency-svc-lp62r [4.435233438s] May 6 18:20:52.467: INFO: Created: latency-svc-wksvc May 6 18:20:52.668: INFO: Got endpoints: latency-svc-wksvc [4.972491061s] May 6 18:20:53.095: INFO: Created: latency-svc-26m2c May 6 18:20:53.440: INFO: Got endpoints: latency-svc-26m2c [5.610888392s] May 6 18:20:53.526: INFO: Created: latency-svc-qpqcg May 6 18:20:54.010: INFO: Got endpoints: latency-svc-qpqcg [6.030656317s] May 6 18:20:54.384: INFO: Created: latency-svc-sfbcr May 6 18:20:54.842: INFO: Got endpoints: latency-svc-sfbcr [6.658830315s] May 6 18:20:55.629: INFO: Created: latency-svc-zhxzh May 6 18:20:56.238: INFO: Got endpoints: latency-svc-zhxzh [8.031455652s] May 6 18:20:56.737: INFO: Created: latency-svc-w66xk May 6 18:20:57.010: INFO: Got endpoints: latency-svc-w66xk [8.738065049s] May 6 18:20:57.292: INFO: Created: latency-svc-vz56h May 6 18:20:57.594: INFO: Got endpoints: latency-svc-vz56h [9.19823958s] May 6 18:20:57.955: INFO: Created: latency-svc-hfd8m May 6 18:20:57.974: INFO: Got endpoints: latency-svc-hfd8m [9.363268795s] May 6 18:20:58.430: INFO: Created: latency-svc-59w57 May 6 18:20:58.656: INFO: Got endpoints: latency-svc-59w57 [9.430884614s] May 6 18:20:58.690: INFO: Created: latency-svc-hzbv4 May 6 18:20:58.698: INFO: Got endpoints: latency-svc-hzbv4 [8.412646314s] May 6 18:20:58.957: INFO: Created: latency-svc-pq9ps May 6 18:20:59.281: INFO: 
Created: latency-svc-nlm7n May 6 18:20:59.281: INFO: Got endpoints: latency-svc-pq9ps [8.94678073s] May 6 18:20:59.285: INFO: Got endpoints: latency-svc-nlm7n [8.765269678s] May 6 18:20:59.457: INFO: Created: latency-svc-bgngc May 6 18:20:59.699: INFO: Got endpoints: latency-svc-bgngc [9.02908947s] May 6 18:20:59.740: INFO: Created: latency-svc-bcbgg May 6 18:20:59.764: INFO: Got endpoints: latency-svc-bcbgg [8.64764468s] May 6 18:20:59.848: INFO: Created: latency-svc-p46sm May 6 18:20:59.850: INFO: Got endpoints: latency-svc-p46sm [7.790683585s] May 6 18:21:00.065: INFO: Created: latency-svc-slc4j May 6 18:21:00.098: INFO: Got endpoints: latency-svc-slc4j [7.429471479s] May 6 18:21:00.126: INFO: Created: latency-svc-hzs8j May 6 18:21:00.130: INFO: Got endpoints: latency-svc-hzs8j [6.690275569s] May 6 18:21:00.164: INFO: Created: latency-svc-wx9hv May 6 18:21:00.291: INFO: Got endpoints: latency-svc-wx9hv [6.280743327s] May 6 18:21:00.330: INFO: Created: latency-svc-fjkld May 6 18:21:00.377: INFO: Got endpoints: latency-svc-fjkld [5.535024466s] May 6 18:21:00.453: INFO: Created: latency-svc-gzbjx May 6 18:21:00.460: INFO: Got endpoints: latency-svc-gzbjx [4.22265581s] May 6 18:21:00.516: INFO: Created: latency-svc-4b9dw May 6 18:21:00.657: INFO: Created: latency-svc-wnwrs May 6 18:21:00.812: INFO: Got endpoints: latency-svc-4b9dw [3.801808423s] May 6 18:21:00.813: INFO: Created: latency-svc-llbhg May 6 18:21:00.845: INFO: Got endpoints: latency-svc-llbhg [2.870828677s] May 6 18:21:00.901: INFO: Got endpoints: latency-svc-wnwrs [3.306562245s] May 6 18:21:00.901: INFO: Created: latency-svc-fxvqs May 6 18:21:00.968: INFO: Got endpoints: latency-svc-fxvqs [2.311466015s] May 6 18:21:00.991: INFO: Created: latency-svc-467jg May 6 18:21:01.001: INFO: Got endpoints: latency-svc-467jg [2.303545492s] May 6 18:21:01.046: INFO: Created: latency-svc-qqs8t May 6 18:21:01.062: INFO: Got endpoints: latency-svc-qqs8t [1.780994783s] May 6 18:21:01.183: INFO: Created: 
latency-svc-w69cc May 6 18:21:01.207: INFO: Got endpoints: latency-svc-w69cc [1.921116789s] May 6 18:21:01.286: INFO: Created: latency-svc-75545 May 6 18:21:01.303: INFO: Got endpoints: latency-svc-75545 [1.603457332s] May 6 18:21:01.337: INFO: Created: latency-svc-c2hzd May 6 18:21:01.350: INFO: Got endpoints: latency-svc-c2hzd [1.585943478s] May 6 18:21:01.422: INFO: Created: latency-svc-mrpbp May 6 18:21:01.428: INFO: Got endpoints: latency-svc-mrpbp [1.577829867s] May 6 18:21:01.482: INFO: Created: latency-svc-8vfqq May 6 18:21:01.494: INFO: Got endpoints: latency-svc-8vfqq [1.396723031s] May 6 18:21:01.552: INFO: Created: latency-svc-xd4wv May 6 18:21:01.567: INFO: Got endpoints: latency-svc-xd4wv [1.436272433s] May 6 18:21:01.620: INFO: Created: latency-svc-dq9ss May 6 18:21:01.633: INFO: Got endpoints: latency-svc-dq9ss [1.342379128s] May 6 18:21:01.690: INFO: Created: latency-svc-57jjf May 6 18:21:01.706: INFO: Got endpoints: latency-svc-57jjf [1.328640338s] May 6 18:21:01.738: INFO: Created: latency-svc-q5pvz May 6 18:21:01.754: INFO: Got endpoints: latency-svc-q5pvz [1.293682739s] May 6 18:21:01.836: INFO: Created: latency-svc-vhg9t May 6 18:21:01.844: INFO: Got endpoints: latency-svc-vhg9t [1.032189334s] May 6 18:21:01.870: INFO: Created: latency-svc-k25js May 6 18:21:01.886: INFO: Got endpoints: latency-svc-k25js [1.041405017s] May 6 18:21:01.992: INFO: Created: latency-svc-8wl4j May 6 18:21:02.038: INFO: Got endpoints: latency-svc-8wl4j [1.137238642s] May 6 18:21:02.154: INFO: Created: latency-svc-cm64w May 6 18:21:02.156: INFO: Got endpoints: latency-svc-cm64w [1.188767779s] May 6 18:21:02.369: INFO: Created: latency-svc-6nw8t May 6 18:21:02.409: INFO: Got endpoints: latency-svc-6nw8t [1.408069594s] May 6 18:21:02.462: INFO: Created: latency-svc-j6shx May 6 18:21:02.573: INFO: Got endpoints: latency-svc-j6shx [1.511039984s] May 6 18:21:02.602: INFO: Created: latency-svc-klp92 May 6 18:21:02.625: INFO: Got endpoints: latency-svc-klp92 [1.418312702s] 
May 6 18:21:02.725: INFO: Created: latency-svc-8qhhr May 6 18:21:02.739: INFO: Got endpoints: latency-svc-8qhhr [1.436472514s] May 6 18:21:03.190: INFO: Created: latency-svc-48rh5 May 6 18:21:03.193: INFO: Got endpoints: latency-svc-48rh5 [1.843134343s] May 6 18:21:03.489: INFO: Created: latency-svc-x4lzq May 6 18:21:03.491: INFO: Got endpoints: latency-svc-x4lzq [2.062643213s] May 6 18:21:03.729: INFO: Created: latency-svc-9tbf8 May 6 18:21:03.770: INFO: Got endpoints: latency-svc-9tbf8 [2.275643699s] May 6 18:21:04.148: INFO: Created: latency-svc-ldnbj May 6 18:21:04.150: INFO: Got endpoints: latency-svc-ldnbj [2.583244812s] May 6 18:21:04.219: INFO: Created: latency-svc-5bx75 May 6 18:21:04.434: INFO: Got endpoints: latency-svc-5bx75 [2.800917608s] May 6 18:21:04.475: INFO: Created: latency-svc-dj5jl May 6 18:21:04.746: INFO: Got endpoints: latency-svc-dj5jl [3.040412418s] May 6 18:21:05.785: INFO: Created: latency-svc-k9ff9 May 6 18:21:06.028: INFO: Got endpoints: latency-svc-k9ff9 [4.273625607s] May 6 18:21:06.299: INFO: Created: latency-svc-fcmsv May 6 18:21:06.318: INFO: Got endpoints: latency-svc-fcmsv [4.473573267s] May 6 18:21:06.507: INFO: Created: latency-svc-26tfw May 6 18:21:06.597: INFO: Got endpoints: latency-svc-26tfw [4.710663169s] May 6 18:21:06.600: INFO: Created: latency-svc-vs6tk May 6 18:21:06.682: INFO: Got endpoints: latency-svc-vs6tk [4.64312347s] May 6 18:21:06.728: INFO: Created: latency-svc-qwfqw May 6 18:21:06.737: INFO: Got endpoints: latency-svc-qwfqw [4.580576491s] May 6 18:21:06.840: INFO: Created: latency-svc-ffp6g May 6 18:21:06.871: INFO: Got endpoints: latency-svc-ffp6g [4.461027199s] May 6 18:21:06.957: INFO: Created: latency-svc-dd6wf May 6 18:21:06.966: INFO: Got endpoints: latency-svc-dd6wf [4.392645216s] May 6 18:21:07.011: INFO: Created: latency-svc-qzrzf May 6 18:21:07.044: INFO: Got endpoints: latency-svc-qzrzf [4.418773617s] May 6 18:21:07.126: INFO: Created: latency-svc-6lx5g May 6 18:21:07.158: INFO: Got endpoints: 
latency-svc-6lx5g [4.418554808s] May 6 18:21:07.328: INFO: Created: latency-svc-lwx4m May 6 18:21:07.346: INFO: Got endpoints: latency-svc-lwx4m [4.152499777s] May 6 18:21:07.373: INFO: Created: latency-svc-4f9gv May 6 18:21:07.398: INFO: Got endpoints: latency-svc-4f9gv [3.907067199s] May 6 18:21:07.502: INFO: Created: latency-svc-9rr4g May 6 18:21:07.560: INFO: Got endpoints: latency-svc-9rr4g [3.79014454s] May 6 18:21:07.650: INFO: Created: latency-svc-5g4pj May 6 18:21:07.652: INFO: Got endpoints: latency-svc-5g4pj [3.502322497s] May 6 18:21:07.718: INFO: Created: latency-svc-cz26z May 6 18:21:07.734: INFO: Got endpoints: latency-svc-cz26z [3.300097907s] May 6 18:21:07.848: INFO: Created: latency-svc-t7dtx May 6 18:21:07.867: INFO: Got endpoints: latency-svc-t7dtx [3.120437795s] May 6 18:21:08.156: INFO: Created: latency-svc-pzcm9 May 6 18:21:08.192: INFO: Got endpoints: latency-svc-pzcm9 [2.164435561s] May 6 18:21:08.246: INFO: Created: latency-svc-cdsdb May 6 18:21:08.374: INFO: Got endpoints: latency-svc-cdsdb [2.056632344s] May 6 18:21:09.132: INFO: Created: latency-svc-bztcm May 6 18:21:09.174: INFO: Got endpoints: latency-svc-bztcm [2.577031296s] May 6 18:21:09.549: INFO: Created: latency-svc-lhzpj May 6 18:21:09.611: INFO: Got endpoints: latency-svc-lhzpj [2.929191448s] May 6 18:21:09.941: INFO: Created: latency-svc-5vs9z May 6 18:21:10.159: INFO: Got endpoints: latency-svc-5vs9z [3.422246214s] May 6 18:21:10.164: INFO: Created: latency-svc-9xbqf May 6 18:21:10.181: INFO: Got endpoints: latency-svc-9xbqf [3.310302401s] May 6 18:21:10.387: INFO: Created: latency-svc-mf7ph May 6 18:21:10.639: INFO: Got endpoints: latency-svc-mf7ph [3.672975822s] May 6 18:21:10.643: INFO: Created: latency-svc-h7pbb May 6 18:21:10.708: INFO: Got endpoints: latency-svc-h7pbb [3.663998563s] May 6 18:21:11.334: INFO: Created: latency-svc-sx22n May 6 18:21:11.890: INFO: Created: latency-svc-vhhcb May 6 18:21:12.139: INFO: Got endpoints: latency-svc-sx22n [4.980480857s] May 6 
18:21:12.218: INFO: Got endpoints: latency-svc-vhhcb [4.872635799s] May 6 18:21:12.683: INFO: Created: latency-svc-9wk9n May 6 18:21:12.744: INFO: Got endpoints: latency-svc-9wk9n [5.346292631s] May 6 18:21:12.744: INFO: Latencies: [65.842447ms 98.699212ms 146.11282ms 261.501985ms 693.413768ms 759.0997ms 793.045574ms 799.773967ms 837.643272ms 868.462297ms 877.672485ms 882.907017ms 901.653072ms 906.533888ms 947.435745ms 961.370154ms 1.021960812s 1.032189334s 1.041405017s 1.121735837s 1.137238642s 1.142859829s 1.160393624s 1.188767779s 1.234767628s 1.244818944s 1.293682739s 1.294707998s 1.299288599s 1.299732734s 1.304209853s 1.316595125s 1.328640338s 1.338991885s 1.342379128s 1.355198846s 1.371561154s 1.396723031s 1.408069594s 1.418312702s 1.420263762s 1.436272433s 1.436472514s 1.509199526s 1.511039984s 1.51329756s 1.520316288s 1.532551688s 1.543355839s 1.545050137s 1.556334334s 1.57536375s 1.577829867s 1.585943478s 1.586929899s 1.593236272s 1.603457332s 1.603609853s 1.65409131s 1.661841776s 1.664964595s 1.717033488s 1.74206008s 1.756413765s 1.76279821s 1.780994783s 1.784112926s 1.790326741s 1.791698224s 1.794857433s 1.819887257s 1.822868708s 1.843134343s 1.844322114s 1.914658034s 1.921116789s 1.9465686s 1.965995909s 2.056632344s 2.062643213s 2.164435561s 2.275643699s 2.292856165s 2.303545492s 2.311466015s 2.341589388s 2.409423574s 2.481885219s 2.571447584s 2.577031296s 2.583244812s 2.662092501s 2.670279012s 2.678618398s 2.740864142s 2.747385657s 2.763555999s 2.770497065s 2.790244808s 2.79781252s 2.800917608s 2.836595307s 2.870828677s 2.911057068s 2.929191448s 2.950518562s 2.951536117s 2.965042099s 2.986869845s 2.991256878s 2.996115157s 3.029010443s 3.040412418s 3.059502568s 3.078244956s 3.108880957s 3.117030686s 3.120437795s 3.185150015s 3.186734114s 3.258346535s 3.300097907s 3.306562245s 3.310302401s 3.422246214s 3.502322497s 3.528323673s 3.533821948s 3.58362041s 3.649435312s 3.663998563s 3.672975822s 3.689732824s 3.706438638s 3.722837275s 3.72507256s 3.761470989s 
3.767394883s 3.767754089s 3.79014454s 3.801808423s 3.828429134s 3.830072167s 3.907067199s 4.013117829s 4.058869848s 4.080232387s 4.095711881s 4.110252114s 4.152499777s 4.162714112s 4.22265581s 4.273625607s 4.280575864s 4.383668831s 4.392645216s 4.416527695s 4.418554808s 4.418773617s 4.427170727s 4.435233438s 4.436409737s 4.450862824s 4.461027199s 4.473573267s 4.580576491s 4.64312347s 4.710663169s 4.739028828s 4.858288289s 4.872635799s 4.972491061s 4.980480857s 4.984152453s 5.13157697s 5.346292631s 5.535024466s 5.5592995s 5.610888392s 5.730920379s 5.758628806s 5.978706184s 6.030656317s 6.080213613s 6.280743327s 6.371358501s 6.658830315s 6.690275569s 7.429471479s 7.790683585s 8.031455652s 8.412646314s 8.64764468s 8.738065049s 8.765269678s 8.94678073s 9.02908947s 9.19823958s 9.363268795s 9.430884614s] May 6 18:21:12.745: INFO: 50 %ile: 2.800917608s May 6 18:21:12.745: INFO: 90 %ile: 5.758628806s May 6 18:21:12.745: INFO: 99 %ile: 9.363268795s May 6 18:21:12.745: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:21:12.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-z7pgf" for this suite. 
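The percentile lines above (50/90/99 %ile over the 200 sorted latency samples) can be reproduced outside the framework with a small shell sketch. The sample values and the ceiling-index formula below are illustrative assumptions, not the framework's exact implementation:

```shell
# Pick the p-th percentile from a whitespace-separated sample list by sorting
# numerically and indexing with ceil(p*n/100) (1-based), mirroring how the
# latency test reports its 50/90/99 %ile figures. Sample data is made up.
samples="3.1 1.2 9.4 2.8 5.7 0.6 4.4 6.0 7.3 8.1"
n=$(printf '%s\n' $samples | wc -l | tr -d ' ')
p=50
idx=$(( (p * n + 99) / 100 ))   # ceil(p*n/100) via integer arithmetic
printf '%s\n' $samples | sort -n | sed -n "${idx}p"
```

With ten samples and p=50 this selects the 5th smallest value; swapping p for 90 or 99 yields the higher tail entries the log prints.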
May 6 18:22:35.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:22:35.277: INFO: namespace: e2e-tests-svc-latency-z7pgf, resource: bindings, ignored listing per whitelist May 6 18:22:35.312: INFO: namespace e2e-tests-svc-latency-z7pgf deletion completed in 1m22.140151559s • [SLOW TEST:132.758 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:22:35.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-t6lts.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-t6lts.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-t6lts.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-t6lts.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-t6lts.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-t6lts.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 18:22:52.011: INFO: DNS probes using e2e-tests-dns-t6lts/dns-test-8f30a0d5-8fc6-11ea-a618-0242ac110019 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:22:52.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-t6lts" for this suite. 
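The wheezy/jessie probe scripts above derive a pod A-record name from the pod IP (`hostname -i`) by replacing dots with dashes and appending the namespace/pod DNS suffix. A standalone version of that transformation, using a hard-coded illustrative IP in place of `hostname -i`:

```shell
# Build the pod A-record name the DNS probe queries: 10.244.1.118 in namespace
# e2e-tests-dns-t6lts becomes 10-244-1-118.e2e-tests-dns-t6lts.pod.cluster.local.
# The IP is a stand-in for the probe pod's real address.
podIP="10.244.1.118"
podARec=$(echo "$podIP" | awk -F. '{print $1"-"$2"-"$3"-"$4".e2e-tests-dns-t6lts.pod.cluster.local"}')
echo "$podARec"
```

The probe then checks this name over both UDP and TCP (`dig +notcp` / `dig +tcp`) and writes an `OK` marker file the test later collects.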
May 6 18:22:58.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:22:58.544: INFO: namespace: e2e-tests-dns-t6lts, resource: bindings, ignored listing per whitelist May 6 18:22:58.582: INFO: namespace e2e-tests-dns-t6lts deletion completed in 6.42865743s • [SLOW TEST:23.270 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:22:58.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-9d0e553a-8fc6-11ea-a618-0242ac110019 STEP: Creating a pod to test consume secrets May 6 18:22:59.275: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9d12b290-8fc6-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-bgtf4" to be "success or failure" May 6 18:22:59.287: INFO: Pod "pod-projected-secrets-9d12b290-8fc6-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.827712ms May 6 18:23:01.528: INFO: Pod "pod-projected-secrets-9d12b290-8fc6-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252220354s May 6 18:23:03.532: INFO: Pod "pod-projected-secrets-9d12b290-8fc6-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25668033s May 6 18:23:05.536: INFO: Pod "pod-projected-secrets-9d12b290-8fc6-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.260410589s STEP: Saw pod success May 6 18:23:05.536: INFO: Pod "pod-projected-secrets-9d12b290-8fc6-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:23:05.538: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-9d12b290-8fc6-11ea-a618-0242ac110019 container projected-secret-volume-test: STEP: delete the pod May 6 18:23:05.755: INFO: Waiting for pod pod-projected-secrets-9d12b290-8fc6-11ea-a618-0242ac110019 to disappear May 6 18:23:05.827: INFO: Pod pod-projected-secrets-9d12b290-8fc6-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:23:05.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bgtf4" for this suite. 
May 6 18:23:13.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:23:14.003: INFO: namespace: e2e-tests-projected-bgtf4, resource: bindings, ignored listing per whitelist May 6 18:23:14.044: INFO: namespace e2e-tests-projected-bgtf4 deletion completed in 8.213687533s • [SLOW TEST:15.463 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:23:14.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 6 18:23:14.147: INFO: Waiting up to 5m0s for pod "pod-a61a3acd-8fc6-11ea-a618-0242ac110019" in namespace "e2e-tests-emptydir-4xrp5" to be "success or failure" May 6 18:23:14.198: INFO: Pod "pod-a61a3acd-8fc6-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 50.783254ms May 6 18:23:16.202: INFO: Pod "pod-a61a3acd-8fc6-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.055077502s May 6 18:23:18.205: INFO: Pod "pod-a61a3acd-8fc6-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 4.058654341s May 6 18:23:20.209: INFO: Pod "pod-a61a3acd-8fc6-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.062654779s STEP: Saw pod success May 6 18:23:20.210: INFO: Pod "pod-a61a3acd-8fc6-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:23:20.212: INFO: Trying to get logs from node hunter-worker pod pod-a61a3acd-8fc6-11ea-a618-0242ac110019 container test-container: STEP: delete the pod May 6 18:23:20.244: INFO: Waiting for pod pod-a61a3acd-8fc6-11ea-a618-0242ac110019 to disappear May 6 18:23:20.255: INFO: Pod pod-a61a3acd-8fc6-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:23:20.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4xrp5" for this suite. 
May 6 18:23:28.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:23:28.354: INFO: namespace: e2e-tests-emptydir-4xrp5, resource: bindings, ignored listing per whitelist May 6 18:23:28.383: INFO: namespace e2e-tests-emptydir-4xrp5 deletion completed in 8.124941647s • [SLOW TEST:14.338 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:23:28.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 6 18:23:29.798: INFO: Pod name pod-release: Found 0 pods out of 1 May 6 18:23:34.802: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:23:36.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-replication-controller-6xzdr" for this suite. May 6 18:23:44.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:23:44.095: INFO: namespace: e2e-tests-replication-controller-6xzdr, resource: bindings, ignored listing per whitelist May 6 18:23:44.138: INFO: namespace e2e-tests-replication-controller-6xzdr deletion completed in 8.116250246s • [SLOW TEST:15.755 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:23:44.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0506 18:24:26.124581 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 6 18:24:26.124: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:24:26.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-wf7gz" for this suite. 
May 6 18:24:34.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:24:34.210: INFO: namespace: e2e-tests-gc-wf7gz, resource: bindings, ignored listing per whitelist May 6 18:24:34.210: INFO: namespace e2e-tests-gc-wf7gz deletion completed in 8.082120395s • [SLOW TEST:50.071 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:24:34.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-77257 STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 18:24:34.594: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 6 18:25:07.124: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.119:8080/dial?request=hostName&protocol=http&host=10.244.1.118&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-77257 
PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 18:25:07.124: INFO: >>> kubeConfig: /root/.kube/config I0506 18:25:07.243654 6 log.go:172] (0xc0007c5ad0) (0xc001f226e0) Create stream I0506 18:25:07.243682 6 log.go:172] (0xc0007c5ad0) (0xc001f226e0) Stream added, broadcasting: 1 I0506 18:25:07.245509 6 log.go:172] (0xc0007c5ad0) Reply frame received for 1 I0506 18:25:07.245537 6 log.go:172] (0xc0007c5ad0) (0xc00203a000) Create stream I0506 18:25:07.245548 6 log.go:172] (0xc0007c5ad0) (0xc00203a000) Stream added, broadcasting: 3 I0506 18:25:07.246499 6 log.go:172] (0xc0007c5ad0) Reply frame received for 3 I0506 18:25:07.246541 6 log.go:172] (0xc0007c5ad0) (0xc001f22780) Create stream I0506 18:25:07.246551 6 log.go:172] (0xc0007c5ad0) (0xc001f22780) Stream added, broadcasting: 5 I0506 18:25:07.247374 6 log.go:172] (0xc0007c5ad0) Reply frame received for 5 I0506 18:25:07.299096 6 log.go:172] (0xc0007c5ad0) Data frame received for 3 I0506 18:25:07.299132 6 log.go:172] (0xc00203a000) (3) Data frame handling I0506 18:25:07.299183 6 log.go:172] (0xc00203a000) (3) Data frame sent I0506 18:25:07.300218 6 log.go:172] (0xc0007c5ad0) Data frame received for 5 I0506 18:25:07.300268 6 log.go:172] (0xc001f22780) (5) Data frame handling I0506 18:25:07.300927 6 log.go:172] (0xc0007c5ad0) Data frame received for 3 I0506 18:25:07.300958 6 log.go:172] (0xc00203a000) (3) Data frame handling I0506 18:25:07.302792 6 log.go:172] (0xc0007c5ad0) Data frame received for 1 I0506 18:25:07.302825 6 log.go:172] (0xc001f226e0) (1) Data frame handling I0506 18:25:07.302843 6 log.go:172] (0xc001f226e0) (1) Data frame sent I0506 18:25:07.302863 6 log.go:172] (0xc0007c5ad0) (0xc001f226e0) Stream removed, broadcasting: 1 I0506 18:25:07.302885 6 log.go:172] (0xc0007c5ad0) Go away received I0506 18:25:07.303128 6 log.go:172] (0xc0007c5ad0) (0xc001f226e0) Stream removed, broadcasting: 1 I0506 18:25:07.303151 6 log.go:172] 
(0xc0007c5ad0) (0xc00203a000) Stream removed, broadcasting: 3 I0506 18:25:07.303163 6 log.go:172] (0xc0007c5ad0) (0xc001f22780) Stream removed, broadcasting: 5 May 6 18:25:07.303: INFO: Waiting for endpoints: map[] May 6 18:25:07.306: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.119:8080/dial?request=hostName&protocol=http&host=10.244.2.83&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-77257 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 18:25:07.306: INFO: >>> kubeConfig: /root/.kube/config I0506 18:25:07.339920 6 log.go:172] (0xc0002fb8c0) (0xc00203a5a0) Create stream I0506 18:25:07.339957 6 log.go:172] (0xc0002fb8c0) (0xc00203a5a0) Stream added, broadcasting: 1 I0506 18:25:07.342577 6 log.go:172] (0xc0002fb8c0) Reply frame received for 1 I0506 18:25:07.342615 6 log.go:172] (0xc0002fb8c0) (0xc0020b2140) Create stream I0506 18:25:07.342631 6 log.go:172] (0xc0002fb8c0) (0xc0020b2140) Stream added, broadcasting: 3 I0506 18:25:07.343786 6 log.go:172] (0xc0002fb8c0) Reply frame received for 3 I0506 18:25:07.343818 6 log.go:172] (0xc0002fb8c0) (0xc00203a640) Create stream I0506 18:25:07.343831 6 log.go:172] (0xc0002fb8c0) (0xc00203a640) Stream added, broadcasting: 5 I0506 18:25:07.344812 6 log.go:172] (0xc0002fb8c0) Reply frame received for 5 I0506 18:25:07.496222 6 log.go:172] (0xc0002fb8c0) Data frame received for 3 I0506 18:25:07.496245 6 log.go:172] (0xc0020b2140) (3) Data frame handling I0506 18:25:07.496257 6 log.go:172] (0xc0020b2140) (3) Data frame sent I0506 18:25:07.496729 6 log.go:172] (0xc0002fb8c0) Data frame received for 5 I0506 18:25:07.496758 6 log.go:172] (0xc0002fb8c0) Data frame received for 3 I0506 18:25:07.496792 6 log.go:172] (0xc0020b2140) (3) Data frame handling I0506 18:25:07.496808 6 log.go:172] (0xc00203a640) (5) Data frame handling I0506 18:25:07.498300 6 log.go:172] (0xc0002fb8c0) Data frame received for 1 I0506 
18:25:07.498351 6 log.go:172] (0xc00203a5a0) (1) Data frame handling I0506 18:25:07.498391 6 log.go:172] (0xc00203a5a0) (1) Data frame sent I0506 18:25:07.498416 6 log.go:172] (0xc0002fb8c0) (0xc00203a5a0) Stream removed, broadcasting: 1 I0506 18:25:07.498435 6 log.go:172] (0xc0002fb8c0) Go away received I0506 18:25:07.498531 6 log.go:172] (0xc0002fb8c0) (0xc00203a5a0) Stream removed, broadcasting: 1 I0506 18:25:07.498565 6 log.go:172] (0xc0002fb8c0) (0xc0020b2140) Stream removed, broadcasting: 3 I0506 18:25:07.498578 6 log.go:172] (0xc0002fb8c0) (0xc00203a640) Stream removed, broadcasting: 5 May 6 18:25:07.498: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:25:07.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-77257" for this suite. May 6 18:25:34.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:25:35.236: INFO: namespace: e2e-tests-pod-network-test-77257, resource: bindings, ignored listing per whitelist May 6 18:25:35.830: INFO: namespace e2e-tests-pod-network-test-77257 deletion completed in 28.328338156s • [SLOW TEST:61.620 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:25:35.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 18:25:37.662: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 6 18:25:42.678: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 6 18:25:46.685: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 6 18:25:48.689: INFO: Creating deployment "test-rollover-deployment" May 6 18:25:48.714: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 6 18:25:50.722: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 6 18:25:50.728: INFO: Ensure that both replica sets have 1 created replica May 6 18:25:50.733: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 6 18:25:50.739: INFO: Updating deployment test-rollover-deployment May 6 18:25:50.739: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 6 18:25:52.796: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 6 18:25:52.801: INFO: Make sure deployment "test-rollover-deployment" is complete May 6 18:25:52.807: INFO: all replica sets need to contain the pod-template-hash label May 6 18:25:52.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386351, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 18:25:54.813: INFO: all replica sets need to contain the pod-template-hash label May 6 18:25:54.813: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386351, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 18:25:56.815: INFO: all replica sets need to contain the pod-template-hash label May 6 18:25:56.815: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386355, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 18:25:58.813: INFO: all replica sets need to contain the pod-template-hash label May 6 18:25:58.813: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386355, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 18:26:01.029: INFO: all replica 
sets need to contain the pod-template-hash label May 6 18:26:01.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386355, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 18:26:02.815: INFO: all replica sets need to contain the pod-template-hash label May 6 18:26:02.815: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386355, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 18:26:05.016: INFO: all replica sets need to contain the pod-template-hash label May 6 18:26:05.016: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386355, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386348, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 18:26:07.010: INFO: May 6 18:26:07.010: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 6 18:26:07.020: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-hc6q8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hc6q8/deployments/test-rollover-deployment,UID:023affb4-8fc7-11ea-99e8-0242ac110002,ResourceVersion:9096365,Generation:2,CreationTimestamp:2020-05-06 18:25:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-06 18:25:48 +0000 UTC 2020-05-06 18:25:48 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-06 18:26:05 +0000 UTC 2020-05-06 18:25:48 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 6 18:26:07.024: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-hc6q8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hc6q8/replicasets/test-rollover-deployment-5b8479fdb6,UID:0373c8d5-8fc7-11ea-99e8-0242ac110002,ResourceVersion:9096355,Generation:2,CreationTimestamp:2020-05-06 18:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 023affb4-8fc7-11ea-99e8-0242ac110002 0xc0019711d7 0xc0019711d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 6 18:26:07.024: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 6 18:26:07.024: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-hc6q8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hc6q8/replicasets/test-rollover-controller,UID:fb40847b-8fc6-11ea-99e8-0242ac110002,ResourceVersion:9096364,Generation:2,CreationTimestamp:2020-05-06 18:25:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 023affb4-8fc7-11ea-99e8-0242ac110002 0xc0019706f7 0xc0019706f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 6 18:26:07.024: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-hc6q8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hc6q8/replicasets/test-rollover-deployment-58494b7559,UID:023ffc71-8fc7-11ea-99e8-0242ac110002,ResourceVersion:9096319,Generation:2,CreationTimestamp:2020-05-06 18:25:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 023affb4-8fc7-11ea-99e8-0242ac110002 0xc001970e67 0xc001970e68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 6 18:26:07.027: INFO: Pod "test-rollover-deployment-5b8479fdb6-8fc68" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-8fc68,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-hc6q8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hc6q8/pods/test-rollover-deployment-5b8479fdb6-8fc68,UID:0394f0c7-8fc7-11ea-99e8-0242ac110002,ResourceVersion:9096333,Generation:0,CreationTimestamp:2020-05-06 18:25:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 0373c8d5-8fc7-11ea-99e8-0242ac110002 0xc001e0ce97 0xc001e0ce98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-69698 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-69698,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-69698 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e0cf10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e0cf30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:25:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:25:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:25:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:25:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.84,StartTime:2020-05-06 18:25:51 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-06 18:25:54 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://3604f800816efc1cd0307df19895974ca936afa5b264da01fd519be203338993}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:26:07.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-hc6q8" for this suite. May 6 18:26:15.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:26:15.250: INFO: namespace: e2e-tests-deployment-hc6q8, resource: bindings, ignored listing per whitelist May 6 18:26:15.272: INFO: namespace e2e-tests-deployment-hc6q8 deletion completed in 8.241103221s • [SLOW TEST:39.441 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:26:15.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 
6 18:26:15.390: INFO: Creating deployment "test-recreate-deployment" May 6 18:26:15.394: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 6 18:26:15.472: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 6 18:26:17.477: INFO: Waiting deployment "test-recreate-deployment" to complete May 6 18:26:17.478: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386375, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386375, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386375, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386375, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 18:26:19.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386375, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386375, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386375, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724386375, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 18:26:21.482: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 6 18:26:21.489: INFO: Updating deployment test-recreate-deployment May 6 18:26:21.489: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 6 18:26:22.586: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-d7546,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d7546/deployments/test-recreate-deployment,UID:12255800-8fc7-11ea-99e8-0242ac110002,ResourceVersion:9096473,Generation:2,CreationTimestamp:2020-05-06 18:26:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-06 18:26:22 +0000 UTC 2020-05-06 18:26:22 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-06 18:26:22 +0000 UTC 2020-05-06 18:26:15 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 6 18:26:22.661: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-d7546,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d7546/replicasets/test-recreate-deployment-589c4bfd,UID:15dbf9f0-8fc7-11ea-99e8-0242ac110002,ResourceVersion:9096470,Generation:1,CreationTimestamp:2020-05-06 18:26:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 12255800-8fc7-11ea-99e8-0242ac110002 0xc001cf806f 0xc001cf8080}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 6 18:26:22.661: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 6 18:26:22.661: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-d7546,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-d7546/replicasets/test-recreate-deployment-5bf7f65dc,UID:1225d6fe-8fc7-11ea-99e8-0242ac110002,ResourceVersion:9096462,Generation:2,CreationTimestamp:2020-05-06 18:26:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 12255800-8fc7-11ea-99e8-0242ac110002 0xc001cf8140 0xc001cf8141}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 6 18:26:22.666: INFO: Pod "test-recreate-deployment-589c4bfd-2zg8d" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-2zg8d,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-d7546,SelfLink:/api/v1/namespaces/e2e-tests-deployment-d7546/pods/test-recreate-deployment-589c4bfd-2zg8d,UID:15e9328b-8fc7-11ea-99e8-0242ac110002,ResourceVersion:9096474,Generation:0,CreationTimestamp:2020-05-06 18:26:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 15dbf9f0-8fc7-11ea-99e8-0242ac110002 0xc001df470f 0xc001df4720}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vpvh5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vpvh5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vpvh5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001df4820} {node.kubernetes.io/unreachable Exists NoExecute 0xc001df4840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:26:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:26:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:26:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:26:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-06 18:26:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:26:22.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-d7546" for this suite. 
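[Editor's note] Throughout the run above, the framework waits on pods and deployments by polling every couple of seconds and logging the elapsed time ("Phase=Pending ... Elapsed: 2.078970061s", "Waiting deployment ... to complete"). A minimal sketch of that poll-until-condition pattern, assuming a hypothetical `pod_succeeded` check (this is illustrative, not the actual e2e framework code):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll `condition` until it returns True or `timeout` elapses.

    Mirrors the log pattern above: each successful wait reports the
    elapsed time. Returns True on success, False on timeout.
    """
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if condition():
            print(f"condition met, elapsed: {elapsed:.3f}s")
            return True
        if elapsed >= timeout:
            return False
        time.sleep(interval)

# Simulated check (hypothetical): the pod "succeeds" on the third poll.
attempts = {"n": 0}
def pod_succeeded():
    attempts["n"] += 1
    return attempts["n"] >= 3

assert wait_for(pod_succeeded, timeout=30.0, interval=0.01)
```

The real framework layers richer error reporting on top (phase, reason, readiness per attempt), but the control flow is this same bounded poll loop.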
May 6 18:26:28.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:26:28.900: INFO: namespace: e2e-tests-deployment-d7546, resource: bindings, ignored listing per whitelist May 6 18:26:28.939: INFO: namespace e2e-tests-deployment-d7546 deletion completed in 6.22811986s • [SLOW TEST:13.667 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:26:28.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 6 18:26:29.034: INFO: namespace e2e-tests-kubectl-lzjq7 May 6 18:26:29.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lzjq7' May 6 18:26:31.705: INFO: stderr: "" May 6 18:26:31.705: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
May 6 18:26:32.709: INFO: Selector matched 1 pods for map[app:redis] May 6 18:26:32.709: INFO: Found 0 / 1 May 6 18:26:33.710: INFO: Selector matched 1 pods for map[app:redis] May 6 18:26:33.710: INFO: Found 0 / 1 May 6 18:26:34.710: INFO: Selector matched 1 pods for map[app:redis] May 6 18:26:34.710: INFO: Found 0 / 1 May 6 18:26:35.710: INFO: Selector matched 1 pods for map[app:redis] May 6 18:26:35.710: INFO: Found 1 / 1 May 6 18:26:35.710: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 6 18:26:35.714: INFO: Selector matched 1 pods for map[app:redis] May 6 18:26:35.714: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 6 18:26:35.714: INFO: wait on redis-master startup in e2e-tests-kubectl-lzjq7 May 6 18:26:35.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-cqjjq redis-master --namespace=e2e-tests-kubectl-lzjq7' May 6 18:26:35.836: INFO: stderr: "" May 6 18:26:35.836: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 06 May 18:26:34.394 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 May 18:26:34.394 # Server started, Redis version 3.2.12\n1:M 06 May 18:26:34.394 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. 
To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 May 18:26:34.395 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 6 18:26:35.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-lzjq7' May 6 18:26:36.012: INFO: stderr: "" May 6 18:26:36.012: INFO: stdout: "service/rm2 exposed\n" May 6 18:26:36.073: INFO: Service rm2 in namespace e2e-tests-kubectl-lzjq7 found. STEP: exposing service May 6 18:26:38.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-lzjq7' May 6 18:26:38.262: INFO: stderr: "" May 6 18:26:38.262: INFO: stdout: "service/rm3 exposed\n" May 6 18:26:38.269: INFO: Service rm3 in namespace e2e-tests-kubectl-lzjq7 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:26:40.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lzjq7" for this suite. 
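[Editor's note] The "Selector matched 1 pods for map[app:redis]" lines above come from matching a label selector against pod labels; the same matchLabels rule decides which pods a ReplicationController manages and which pods `kubectl expose` routes to. A small sketch of that rule with hypothetical pod data:

```python
def matches(selector: dict, labels: dict) -> bool:
    """True when every key/value pair in `selector` is present in
    `labels` (the Kubernetes matchLabels rule: selector is a subset
    of the object's labels; extra labels are ignored)."""
    return all(labels.get(k) == v for k, v in selector.items())

# Hypothetical pods; only the first carries the app=redis label.
pods = [
    {"name": "redis-master-cqjjq", "labels": {"app": "redis", "role": "master"}},
    {"name": "nginx-pod", "labels": {"app": "nginx"}},
]
selected = [p["name"] for p in pods if matches({"app": "redis"}, p["labels"])]
assert selected == ["redis-master-cqjjq"]
```

An empty selector matches everything under this rule, which is why controllers are required to set a non-empty selector.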
May 6 18:27:02.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:27:02.477: INFO: namespace: e2e-tests-kubectl-lzjq7, resource: bindings, ignored listing per whitelist May 6 18:27:02.487: INFO: namespace e2e-tests-kubectl-lzjq7 deletion completed in 22.210556565s • [SLOW TEST:33.547 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:27:02.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-2nhsr May 6 18:27:06.640: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-2nhsr STEP: checking the pod's current state and verifying that restartCount is present May 6 18:27:06.643: INFO: Initial restart 
count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:31:07.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-2nhsr" for this suite. May 6 18:31:13.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:31:13.834: INFO: namespace: e2e-tests-container-probe-2nhsr, resource: bindings, ignored listing per whitelist May 6 18:31:13.887: INFO: namespace e2e-tests-container-probe-2nhsr deletion completed in 6.592448297s • [SLOW TEST:251.400 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:31:13.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is 
created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 6 18:31:21.297: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:31:22.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-xdmpw" for this suite. May 6 18:31:46.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:31:46.459: INFO: namespace: e2e-tests-replicaset-xdmpw, resource: bindings, ignored listing per whitelist May 6 18:31:46.466: INFO: namespace e2e-tests-replicaset-xdmpw deletion completed in 24.130017521s • [SLOW TEST:32.578 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:31:46.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all May 6 18:31:46.596: INFO: Waiting up to 5m0s for pod "client-containers-d78d00a0-8fc7-11ea-a618-0242ac110019" in namespace "e2e-tests-containers-zhd27" to be "success or failure" May 6 18:31:46.600: INFO: Pod "client-containers-d78d00a0-8fc7-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 3.739076ms May 6 18:31:48.604: INFO: Pod "client-containers-d78d00a0-8fc7-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008326352s May 6 18:31:50.608: INFO: Pod "client-containers-d78d00a0-8fc7-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011493192s May 6 18:31:52.612: INFO: Pod "client-containers-d78d00a0-8fc7-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0157293s STEP: Saw pod success May 6 18:31:52.612: INFO: Pod "client-containers-d78d00a0-8fc7-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:31:52.615: INFO: Trying to get logs from node hunter-worker pod client-containers-d78d00a0-8fc7-11ea-a618-0242ac110019 container test-container: STEP: delete the pod May 6 18:31:52.654: INFO: Waiting for pod client-containers-d78d00a0-8fc7-11ea-a618-0242ac110019 to disappear May 6 18:31:52.660: INFO: Pod client-containers-d78d00a0-8fc7-11ea-a618-0242ac110019 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:31:52.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-zhd27" for this suite. 
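[Editor's note] The "override all" test above sets both `command` and `args` on the pod, replacing the image's default entrypoint and arguments. The documented Kubernetes interaction between the image's ENTRYPOINT/CMD and the pod's command/args can be sketched as a small lookup (function name and sample values are illustrative):

```python
def effective_invocation(entrypoint, cmd, command=None, args=None):
    """Resolve the container invocation per the Kubernetes rules:
    - neither command nor args set -> image ENTRYPOINT + image CMD
    - command set, args unset      -> command only (image CMD ignored)
    - args set, command unset      -> image ENTRYPOINT + args
    - both set                     -> command + args (image defaults ignored)
    """
    if command is None and args is None:
        return entrypoint + cmd
    if command is not None and args is None:
        return command
    if command is None:
        return entrypoint + args
    return command + args

# "Override all", as in the test above: both command and args supplied.
assert effective_invocation(
    ["/entrypoint"], ["default-arg"],
    command=["/bin/sh", "-c"], args=["echo hi"],
) == ["/bin/sh", "-c", "echo hi"]

# No overrides: the image defaults run unchanged.
assert effective_invocation(["/entrypoint"], ["default-arg"]) == ["/entrypoint", "default-arg"]
```

Note the asymmetric middle case: supplying `command` alone discards the image's CMD, while supplying `args` alone keeps the image's ENTRYPOINT.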
May 6 18:31:58.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:31:58.886: INFO: namespace: e2e-tests-containers-zhd27, resource: bindings, ignored listing per whitelist May 6 18:31:58.953: INFO: namespace e2e-tests-containers-zhd27 deletion completed in 6.290266033s • [SLOW TEST:12.487 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:31:58.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0506 18:32:01.847071 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 6 18:32:01.847: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:32:01.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-q9ch9" for this suite. 
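[Editor's note] The garbage-collector test above deletes a Deployment without orphaning and then waits for its ReplicaSet and pods to disappear; the collector finds them by walking ownerReferences (visible in the pod dumps above, e.g. the ReplicaSet entries under OwnerReferences). A sketch of that cascade, using hypothetical UIDs:

```python
def cascade_delete(objects, root_uid):
    """Return the set of UIDs deleted when `root_uid` is removed
    without orphaning: the root plus everything transitively owned
    via ownerReferences, as the garbage collector resolves it."""
    doomed = {root_uid}
    changed = True
    while changed:  # iterate to a fixed point over the ownership graph
        changed = False
        for obj in objects:
            if obj["uid"] in doomed:
                continue
            if any(ref in doomed for ref in obj["ownerRefs"]):
                doomed.add(obj["uid"])
                changed = True
    return doomed

# Hypothetical Deployment -> ReplicaSet -> Pods chain, as in the test.
objs = [
    {"uid": "deploy-1", "ownerRefs": []},
    {"uid": "rs-1", "ownerRefs": ["deploy-1"]},
    {"uid": "pod-1", "ownerRefs": ["rs-1"]},
    {"uid": "pod-2", "ownerRefs": ["rs-1"]},
]
assert cascade_delete(objs, "deploy-1") == {"deploy-1", "rs-1", "pod-1", "pod-2"}
```

The intermediate "expected 0 rs, got 1 rs" lines in the log are just this cascade observed mid-flight: the dependents exist until the collector reaches them.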
May 6 18:32:07.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:32:07.919: INFO: namespace: e2e-tests-gc-q9ch9, resource: bindings, ignored listing per whitelist
May 6 18:32:08.017: INFO: namespace e2e-tests-gc-q9ch9 deletion completed in 6.166891489s
• [SLOW TEST:9.063 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:32:08.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:32:08.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-2m6n5" for this suite.
May 6 18:32:14.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:32:14.321: INFO: namespace: e2e-tests-kubelet-test-2m6n5, resource: bindings, ignored listing per whitelist
May 6 18:32:14.404: INFO: namespace e2e-tests-kubelet-test-2m6n5 deletion completed in 6.141081866s
• [SLOW TEST:6.387 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:32:14.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 6 18:32:14.766: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e841e92e-8fc7-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001bae392), BlockOwnerDeletion:(*bool)(0xc001bae393)}}
May 6 18:32:14.775: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e831340f-8fc7-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001b090e2), BlockOwnerDeletion:(*bool)(0xc001b090e3)}}
May 6 18:32:14.945: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"e8319bef-8fc7-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00190f79a), BlockOwnerDeletion:(*bool)(0xc00190f79b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:32:19.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-k66n4" for this suite.
May 6 18:32:26.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:32:26.545: INFO: namespace: e2e-tests-gc-k66n4, resource: bindings, ignored listing per whitelist
May 6 18:32:26.594: INFO: namespace e2e-tests-gc-k66n4 deletion completed in 6.619510878s
• [SLOW TEST:12.189 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:32:26.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-nwrpx
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-nwrpx
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-nwrpx
May 6 18:32:28.185: INFO: Found 0 stateful pods, waiting for 1
May 6 18:32:38.191: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May 6 18:32:38.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 6 18:32:38.508: INFO: stderr: "I0506 18:32:38.374376 1991 log.go:172] (0xc00013a840) (0xc0005e9220) Create stream\nI0506 18:32:38.374451 1991 log.go:172] (0xc00013a840) (0xc0005e9220) Stream added, broadcasting: 1\nI0506 18:32:38.379416 1991 log.go:172] (0xc00013a840) Reply frame received for 1\nI0506 18:32:38.379497 1991 log.go:172] (0xc00013a840) (0xc000750000) Create stream\nI0506 18:32:38.379599 1991 log.go:172] (0xc00013a840) (0xc000750000) Stream added, broadcasting: 3\nI0506 18:32:38.380670 1991 log.go:172] (0xc00013a840) Reply frame received for 3\nI0506 18:32:38.380714 1991 log.go:172] (0xc00013a840) (0xc0003dc000) Create stream\nI0506 18:32:38.380733 1991 log.go:172] (0xc00013a840) (0xc0003dc000) Stream added, broadcasting: 5\nI0506 18:32:38.381915 1991 log.go:172] (0xc00013a840) Reply frame received for 5\nI0506 18:32:38.501056 1991 log.go:172] (0xc00013a840) Data frame received for 3\nI0506 18:32:38.501092 1991 log.go:172] (0xc000750000) (3) Data frame handling\nI0506 18:32:38.501229 1991 log.go:172] (0xc000750000) (3) Data frame sent\nI0506 18:32:38.501561 1991 log.go:172] (0xc00013a840) Data frame received for 3\nI0506 18:32:38.501593 1991 log.go:172] (0xc000750000) (3) Data frame handling\nI0506 18:32:38.501891 1991 log.go:172] (0xc00013a840) Data frame received for 5\nI0506 18:32:38.501916 1991 log.go:172] (0xc0003dc000) (5) Data frame handling\nI0506 18:32:38.503949 1991 log.go:172] (0xc00013a840) Data frame received for 1\nI0506 18:32:38.503967 1991 log.go:172] (0xc0005e9220) (1) Data frame handling\nI0506 18:32:38.503977 1991 log.go:172] (0xc0005e9220) (1) Data frame sent\nI0506 18:32:38.503985 1991 log.go:172] (0xc00013a840) (0xc0005e9220) Stream removed, broadcasting: 1\nI0506 18:32:38.504132 1991 log.go:172] (0xc00013a840) Go away received\nI0506 18:32:38.504194 1991 log.go:172] (0xc00013a840) (0xc0005e9220) Stream removed, broadcasting: 1\nI0506 18:32:38.504209 1991 log.go:172] (0xc00013a840) (0xc000750000) Stream removed, broadcasting: 3\nI0506 18:32:38.504219 1991 log.go:172] (0xc00013a840) (0xc0003dc000) Stream removed, broadcasting: 5\n"
May 6 18:32:38.508: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 6 18:32:38.508: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 6 18:32:38.512: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 6 18:32:48.517: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 6 18:32:48.517: INFO: Waiting for statefulset status.replicas updated to 0
May 6 18:32:48.676: INFO: POD NODE PHASE GRACE CONDITIONS
May 6 18:32:48.676: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC }]
May 6 18:32:48.676: INFO:
May 6 18:32:48.676: INFO: StatefulSet ss has not reached scale 3, at 1
May 6 18:32:49.682: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.850731787s
May 6 18:32:50.685: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.845444953s
May 6 18:32:51.690: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.841781902s
May 6 18:32:52.694: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.836922332s
May 6 18:32:53.700: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.832708338s
May 6 18:32:54.706: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.826996769s
May 6 18:32:55.710: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.821545093s
May 6 18:32:56.715: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.817268297s
May 6 18:32:57.719: INFO: Verifying statefulset ss doesn't scale past 3 for another 811.888627ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-nwrpx
May 6 18:32:58.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 6 18:32:58.951: INFO: stderr: "I0506 18:32:58.852593 2014 log.go:172] (0xc00015c840) (0xc00074e640) Create stream\nI0506 18:32:58.852689 2014 log.go:172] (0xc00015c840) (0xc00074e640) Stream added, broadcasting: 1\nI0506 18:32:58.855661 2014 log.go:172] (0xc00015c840) Reply frame received for 1\nI0506 18:32:58.855723 2014 log.go:172] (0xc00015c840) (0xc000606e60) Create stream\nI0506 18:32:58.855745 2014 log.go:172] (0xc00015c840) (0xc000606e60) Stream added, broadcasting: 3\nI0506 18:32:58.856698 2014 log.go:172] (0xc00015c840) Reply frame received for 3\nI0506 18:32:58.856728 2014 log.go:172] (0xc00015c840) (0xc00074e6e0) Create stream\nI0506 18:32:58.856737 2014 log.go:172] (0xc00015c840) (0xc00074e6e0) Stream added, broadcasting: 5\nI0506 18:32:58.858004 2014 log.go:172] (0xc00015c840) Reply frame received for 5\nI0506 18:32:58.943433 2014 log.go:172] (0xc00015c840) Data frame received for 3\nI0506 18:32:58.943464 2014 log.go:172] (0xc000606e60) (3) Data frame handling\nI0506 18:32:58.943488 2014 log.go:172] (0xc000606e60) (3) Data frame sent\nI0506 18:32:58.943508 2014 log.go:172] (0xc00015c840) Data frame received for 3\nI0506 18:32:58.943542 2014 log.go:172] (0xc00015c840) Data frame received for 5\nI0506 18:32:58.943562 2014 log.go:172] (0xc00074e6e0) (5) Data frame handling\nI0506 18:32:58.943582 2014 log.go:172] (0xc000606e60) (3) Data frame handling\nI0506 18:32:58.945523 2014 log.go:172] (0xc00015c840) Data frame received for 1\nI0506 18:32:58.945548 2014 log.go:172] (0xc00074e640) (1) Data frame handling\nI0506 18:32:58.945565 2014 log.go:172] (0xc00074e640) (1) Data frame sent\nI0506 18:32:58.945818 2014 log.go:172] (0xc00015c840) (0xc00074e640) Stream removed, broadcasting: 1\nI0506 18:32:58.945853 2014 log.go:172] (0xc00015c840) Go away received\nI0506 18:32:58.946075 2014 log.go:172] (0xc00015c840) (0xc00074e640) Stream removed, broadcasting: 1\nI0506 18:32:58.946096 2014 log.go:172] (0xc00015c840) (0xc000606e60) Stream removed, broadcasting: 3\nI0506 18:32:58.946105 2014 log.go:172] (0xc00015c840) (0xc00074e6e0) Stream removed, broadcasting: 5\n"
May 6 18:32:58.951: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 6 18:32:58.951: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 6 18:32:58.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 6 18:32:59.156: INFO: stderr: "I0506 18:32:59.086360 2038 log.go:172] (0xc000138160) (0xc000740640) Create stream\nI0506 18:32:59.086442 2038 log.go:172] (0xc000138160) (0xc000740640) Stream added, broadcasting: 1\nI0506 18:32:59.089237 2038 log.go:172] (0xc000138160) Reply frame received for 1\nI0506 18:32:59.089265 2038 log.go:172] (0xc000138160) (0xc0007406e0) Create stream\nI0506 18:32:59.089276 2038 log.go:172] (0xc000138160) (0xc0007406e0) Stream added, broadcasting: 3\nI0506 18:32:59.090537 2038 log.go:172] (0xc000138160) Reply frame received for 3\nI0506 18:32:59.090585 2038 log.go:172] (0xc000138160) (0xc000498c80) Create stream\nI0506 18:32:59.090607 2038 log.go:172] (0xc000138160) (0xc000498c80) Stream added, broadcasting: 5\nI0506 18:32:59.091704 2038 log.go:172] (0xc000138160) Reply frame received for 5\nI0506 18:32:59.149544 2038 log.go:172] (0xc000138160) Data frame received for 5\nI0506 18:32:59.149576 2038 log.go:172] (0xc000498c80) (5) Data frame handling\nI0506 18:32:59.149585 2038 log.go:172] (0xc000498c80) (5) Data frame sent\nI0506 18:32:59.149590 2038 log.go:172] (0xc000138160) Data frame received for 5\nI0506 18:32:59.149594 2038 log.go:172] (0xc000498c80) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0506 18:32:59.149612 2038 log.go:172] (0xc000138160) Data frame received for 3\nI0506 18:32:59.149618 2038 log.go:172] (0xc0007406e0) (3) Data frame handling\nI0506 18:32:59.149623 2038 log.go:172] (0xc0007406e0) (3) Data frame sent\nI0506 18:32:59.149627 2038 log.go:172] (0xc000138160) Data frame received for 3\nI0506 18:32:59.149631 2038 log.go:172] (0xc0007406e0) (3) Data frame handling\nI0506 18:32:59.151166 2038 log.go:172] (0xc000138160) Data frame received for 1\nI0506 18:32:59.151190 2038 log.go:172] (0xc000740640) (1) Data frame handling\nI0506 18:32:59.151231 2038 log.go:172] (0xc000740640) (1) Data frame sent\nI0506 18:32:59.151245 2038 log.go:172] (0xc000138160) (0xc000740640) Stream removed, broadcasting: 1\nI0506 18:32:59.151269 2038 log.go:172] (0xc000138160) Go away received\nI0506 18:32:59.151533 2038 log.go:172] (0xc000138160) (0xc000740640) Stream removed, broadcasting: 1\nI0506 18:32:59.151557 2038 log.go:172] (0xc000138160) (0xc0007406e0) Stream removed, broadcasting: 3\nI0506 18:32:59.151567 2038 log.go:172] (0xc000138160) (0xc000498c80) Stream removed, broadcasting: 5\n"
May 6 18:32:59.156: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 6 18:32:59.156: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 6 18:32:59.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 6 18:32:59.353: INFO: stderr: "I0506 18:32:59.291454 2062 log.go:172] (0xc0008202c0) (0xc000716640) Create stream\nI0506 18:32:59.291524 2062 log.go:172] (0xc0008202c0) (0xc000716640) Stream added, broadcasting: 1\nI0506 18:32:59.294622 2062 log.go:172] (0xc0008202c0) Reply frame received for 1\nI0506 18:32:59.294682 2062 log.go:172] (0xc0008202c0) (0xc000662c80) Create stream\nI0506 18:32:59.294701 2062 log.go:172] (0xc0008202c0) (0xc000662c80) Stream added, broadcasting: 3\nI0506 18:32:59.295696 2062 log.go:172] (0xc0008202c0) Reply frame received for 3\nI0506 18:32:59.295730 2062 log.go:172] (0xc0008202c0) (0xc0007166e0) Create stream\nI0506 18:32:59.295736 2062 log.go:172] (0xc0008202c0) (0xc0007166e0) Stream added, broadcasting: 5\nI0506 18:32:59.296969 2062 log.go:172] (0xc0008202c0) Reply frame received for 5\nI0506 18:32:59.347264 2062 log.go:172] (0xc0008202c0) Data frame received for 3\nI0506 18:32:59.347288 2062 log.go:172] (0xc000662c80) (3) Data frame handling\nI0506 18:32:59.347303 2062 log.go:172] (0xc000662c80) (3) Data frame sent\nI0506 18:32:59.347308 2062 log.go:172] (0xc0008202c0) Data frame received for 3\nI0506 18:32:59.347338 2062 log.go:172] (0xc0008202c0) Data frame received for 5\nI0506 18:32:59.347392 2062 log.go:172] (0xc0007166e0) (5) Data frame handling\nI0506 18:32:59.347413 2062 log.go:172] (0xc0007166e0) (5) Data frame sent\nI0506 18:32:59.347429 2062 log.go:172] (0xc0008202c0) Data frame received for 5\nI0506 18:32:59.347442 2062 log.go:172] (0xc0007166e0) (5) Data frame handling\nI0506 18:32:59.347537 2062 log.go:172] (0xc000662c80) (3) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0506 18:32:59.348974 2062 log.go:172] (0xc0008202c0) Data frame received for 1\nI0506 18:32:59.348997 2062 log.go:172] (0xc000716640) (1) Data frame handling\nI0506 18:32:59.349008 2062 log.go:172] (0xc000716640) (1) Data frame sent\nI0506 18:32:59.349031 2062 log.go:172] (0xc0008202c0) (0xc000716640) Stream removed, broadcasting: 1\nI0506 18:32:59.349058 2062 log.go:172] (0xc0008202c0) Go away received\nI0506 18:32:59.349538 2062 log.go:172] (0xc0008202c0) (0xc000716640) Stream removed, broadcasting: 1\nI0506 18:32:59.349563 2062 log.go:172] (0xc0008202c0) (0xc000662c80) Stream removed, broadcasting: 3\nI0506 18:32:59.349574 2062 log.go:172] (0xc0008202c0) (0xc0007166e0) Stream removed, broadcasting: 5\n"
May 6 18:32:59.354: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 6 18:32:59.354: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 6 18:32:59.358: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
May 6 18:33:09.363: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 6 18:33:09.363: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 6 18:33:09.363: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
May 6 18:33:09.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 6 18:33:09.584: INFO: stderr: "I0506 18:33:09.506826 2084 log.go:172] (0xc00015c790) (0xc000647540) Create stream\nI0506 18:33:09.506881 2084 log.go:172] (0xc00015c790) (0xc000647540) Stream added, broadcasting: 1\nI0506 18:33:09.508921 2084 log.go:172] (0xc00015c790) Reply frame received for 1\nI0506 18:33:09.508965 2084 log.go:172] (0xc00015c790) (0xc000718000) Create stream\nI0506 18:33:09.508977 2084 log.go:172] (0xc00015c790) (0xc000718000) Stream added, broadcasting: 3\nI0506 18:33:09.510207 2084 log.go:172] (0xc00015c790) Reply frame received for 3\nI0506 18:33:09.510229 2084 log.go:172] (0xc00015c790) (0xc0007180a0) Create stream\nI0506 18:33:09.510237 2084 log.go:172] (0xc00015c790) (0xc0007180a0) Stream added, broadcasting: 5\nI0506 18:33:09.511124 2084 log.go:172] (0xc00015c790) Reply frame received for 5\nI0506 18:33:09.577705 2084 log.go:172] (0xc00015c790) Data frame received for 5\nI0506 18:33:09.577730 2084 log.go:172] (0xc0007180a0) (5) Data frame handling\nI0506 18:33:09.577775 2084 log.go:172] (0xc00015c790) Data frame received for 3\nI0506 18:33:09.577821 2084 log.go:172] (0xc000718000) (3) Data frame handling\nI0506 18:33:09.577854 2084 log.go:172] (0xc000718000) (3) Data frame sent\nI0506 18:33:09.577869 2084 log.go:172] (0xc00015c790) Data frame received for 3\nI0506 18:33:09.577883 2084 log.go:172] (0xc000718000) (3) Data frame handling\nI0506 18:33:09.579299 2084 log.go:172] (0xc00015c790) Data frame received for 1\nI0506 18:33:09.579318 2084 log.go:172] (0xc000647540) (1) Data frame handling\nI0506 18:33:09.579333 2084 log.go:172] (0xc000647540) (1) Data frame sent\nI0506 18:33:09.579354 2084 log.go:172] (0xc00015c790) (0xc000647540) Stream removed, broadcasting: 1\nI0506 18:33:09.579371 2084 log.go:172] (0xc00015c790) Go away received\nI0506 18:33:09.579622 2084 log.go:172] (0xc00015c790) (0xc000647540) Stream removed, broadcasting: 1\nI0506 18:33:09.579649 2084 log.go:172] (0xc00015c790) (0xc000718000) Stream removed, broadcasting: 3\nI0506 18:33:09.579664 2084 log.go:172] (0xc00015c790) (0xc0007180a0) Stream removed, broadcasting: 5\n"
May 6 18:33:09.584: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 6 18:33:09.584: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 6 18:33:09.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 6 18:33:09.806: INFO: stderr: "I0506 18:33:09.706604 2106 log.go:172] (0xc000794160) (0xc0006c26e0) Create stream\nI0506 18:33:09.706657 2106 log.go:172] (0xc000794160) (0xc0006c26e0) Stream added, broadcasting: 1\nI0506 18:33:09.709473 2106 log.go:172] (0xc000794160) Reply frame received for 1\nI0506 18:33:09.709531 2106 log.go:172] (0xc000794160) (0xc0003ccaa0) Create stream\nI0506 18:33:09.709550 2106 log.go:172] (0xc000794160) (0xc0003ccaa0) Stream added, broadcasting: 3\nI0506 18:33:09.710416 2106 log.go:172] (0xc000794160) Reply frame received for 3\nI0506 18:33:09.710450 2106 log.go:172] (0xc000794160) (0xc000418000) Create stream\nI0506 18:33:09.710459 2106 log.go:172] (0xc000794160) (0xc000418000) Stream added, broadcasting: 5\nI0506 18:33:09.711293 2106 log.go:172] (0xc000794160) Reply frame received for 5\nI0506 18:33:09.797768 2106 log.go:172] (0xc000794160) Data frame received for 3\nI0506 18:33:09.797816 2106 log.go:172] (0xc0003ccaa0) (3) Data frame handling\nI0506 18:33:09.797839 2106 log.go:172] (0xc0003ccaa0) (3) Data frame sent\nI0506 18:33:09.797858 2106 log.go:172] (0xc000794160) Data frame received for 3\nI0506 18:33:09.797871 2106 log.go:172] (0xc0003ccaa0) (3) Data frame handling\nI0506 18:33:09.797895 2106 log.go:172] (0xc000794160) Data frame received for 5\nI0506 18:33:09.797938 2106 log.go:172] (0xc000418000) (5) Data frame handling\nI0506 18:33:09.800784 2106 log.go:172] (0xc000794160) Data frame received for 1\nI0506 18:33:09.800805 2106 log.go:172] (0xc0006c26e0) (1) Data frame handling\nI0506 18:33:09.800816 2106 log.go:172] (0xc0006c26e0) (1) Data frame sent\nI0506 18:33:09.800830 2106 log.go:172] (0xc000794160) (0xc0006c26e0) Stream removed, broadcasting: 1\nI0506 18:33:09.800866 2106 log.go:172] (0xc000794160) Go away received\nI0506 18:33:09.801082 2106 log.go:172] (0xc000794160) (0xc0006c26e0) Stream removed, broadcasting: 1\nI0506 18:33:09.801337 2106 log.go:172] (0xc000794160) (0xc0003ccaa0) Stream removed, broadcasting: 3\nI0506 18:33:09.801363 2106 log.go:172] (0xc000794160) (0xc000418000) Stream removed, broadcasting: 5\n"
May 6 18:33:09.806: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 6 18:33:09.806: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 6 18:33:09.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 6 18:33:10.038: INFO: stderr: "I0506 18:33:09.940381 2128 log.go:172] (0xc00075a370) (0xc0005fd360) Create stream\nI0506 18:33:09.940555 2128 log.go:172] (0xc00075a370) (0xc0005fd360) Stream added, broadcasting: 1\nI0506 18:33:09.943228 2128 log.go:172] (0xc00075a370) Reply frame received for 1\nI0506 18:33:09.943277 2128 log.go:172] (0xc00075a370) (0xc0005fd400) Create stream\nI0506 18:33:09.943291 2128 log.go:172] (0xc00075a370) (0xc0005fd400) Stream added, broadcasting: 3\nI0506 18:33:09.944122 2128 log.go:172] (0xc00075a370) Reply frame received for 3\nI0506 18:33:09.944159 2128 log.go:172] (0xc00075a370) (0xc0005e2000) Create stream\nI0506 18:33:09.944170 2128 log.go:172] (0xc00075a370) (0xc0005e2000) Stream added, broadcasting: 5\nI0506 18:33:09.945048 2128 log.go:172] (0xc00075a370) Reply frame received for 5\nI0506 18:33:10.031856 2128 log.go:172] (0xc00075a370) Data frame received for 3\nI0506 18:33:10.031900 2128 log.go:172] (0xc00075a370) Data frame received for 5\nI0506 18:33:10.031924 2128 log.go:172] (0xc0005e2000) (5) Data frame handling\nI0506 18:33:10.031947 2128 log.go:172] (0xc0005fd400) (3) Data frame handling\nI0506 18:33:10.031959 2128 log.go:172] (0xc0005fd400) (3) Data frame sent\nI0506 18:33:10.031969 2128 log.go:172] (0xc00075a370) Data frame received for 3\nI0506 18:33:10.031978 2128 log.go:172] (0xc0005fd400) (3) Data frame handling\nI0506 18:33:10.033604 2128 log.go:172] (0xc00075a370) Data frame received for 1\nI0506 18:33:10.033623 2128 log.go:172] (0xc0005fd360) (1) Data frame handling\nI0506 18:33:10.033634 2128 log.go:172] (0xc0005fd360) (1) Data frame sent\nI0506 18:33:10.033668 2128 log.go:172] (0xc00075a370) (0xc0005fd360) Stream removed, broadcasting: 1\nI0506 18:33:10.033721 2128 log.go:172] (0xc00075a370) Go away received\nI0506 18:33:10.033869 2128 log.go:172] (0xc00075a370) (0xc0005fd360) Stream removed, broadcasting: 1\nI0506 18:33:10.033895 2128 log.go:172] (0xc00075a370) (0xc0005fd400) Stream removed, broadcasting: 3\nI0506 18:33:10.033917 2128 log.go:172] (0xc00075a370) (0xc0005e2000) Stream removed, broadcasting: 5\n"
May 6 18:33:10.038: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 6 18:33:10.038: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 6 18:33:10.038: INFO: Waiting for statefulset status.replicas updated to 0
May 6 18:33:10.042: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
May 6 18:33:20.051: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 6 18:33:20.051: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 6 18:33:20.051: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 6 18:33:20.076: INFO: POD NODE PHASE GRACE CONDITIONS
May 6 18:33:20.076: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC }]
May 6 18:33:20.077: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }]
May 6 18:33:20.077: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }]
May 6 18:33:20.077: INFO:
May 6 18:33:20.077: INFO: StatefulSet ss has not reached scale 0, at 3
May 6 18:33:21.294: INFO: POD NODE PHASE GRACE CONDITIONS
May 6 18:33:21.294: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC }]
May 6 18:33:21.294: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }]
May 6 18:33:21.294: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }]
May 6 18:33:21.294: INFO:
May 6 18:33:21.294: INFO: StatefulSet ss has not reached scale 0, at 3
May 6 18:33:22.327: INFO: POD NODE PHASE GRACE CONDITIONS
May 6 18:33:22.327: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC }]
May 6 18:33:22.327: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }]
May 6 18:33:22.327: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }]
May 6 18:33:22.327: INFO:
May 6 18:33:22.328: INFO: StatefulSet ss has not reached scale 0, at 3
May 6 18:33:23.333: INFO: POD NODE PHASE GRACE CONDITIONS
May 6 18:33:23.333: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC }]
May 6 18:33:23.333: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }]
May 6 18:33:23.333: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }]
May 6 18:33:23.333: INFO:
May 6 18:33:23.333: INFO: StatefulSet ss has not reached scale 0, at 3
May 6 18:33:24.339: INFO: POD NODE PHASE GRACE CONDITIONS
May 6 18:33:24.339: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC }]
May 6 18:33:24.339: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }]
May 6 18:33:24.339: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }]
May 6 18:33:24.339: INFO:
May 6 18:33:24.339: INFO: StatefulSet ss has not reached scale 0, at 3
May 6 18:33:25.343: INFO: POD NODE PHASE GRACE CONDITIONS
May 6 18:33:25.343: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC }]
May 6 18:33:25.343: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }]
May 6 18:33:25.343: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00
+0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }] May 6 18:33:25.343: INFO: May 6 18:33:25.343: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 18:33:26.349: INFO: POD NODE PHASE GRACE CONDITIONS May 6 18:33:26.349: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC }] May 6 18:33:26.349: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }] May 6 18:33:26.349: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }] May 6 18:33:26.349: INFO: May 6 18:33:26.349: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 18:33:27.431: INFO: POD NODE PHASE GRACE CONDITIONS May 6 18:33:27.431: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC }] May 6 18:33:27.431: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }] May 6 18:33:27.431: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }] May 6 18:33:27.431: INFO: May 6 18:33:27.431: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 18:33:28.436: INFO: POD NODE PHASE GRACE CONDITIONS May 6 18:33:28.436: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC }] May 6 18:33:28.436: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }] May 6 18:33:28.436: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }] May 6 18:33:28.436: INFO: May 6 18:33:28.436: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 18:33:29.440: INFO: POD NODE PHASE GRACE CONDITIONS May 6 18:33:29.440: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:28 +0000 UTC }] May 
6 18:33:29.440: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }]
May 6 18:33:29.440: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:33:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:32:48 +0000 UTC }]
May 6 18:33:29.440: INFO:
May 6 18:33:29.440: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-nwrpx
May 6 18:33:30.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 6 18:33:30.588: INFO: rc: 1
May 6 18:33:30.589: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0012dcd20 exit status 1 true [0xc000a980b8 0xc000a980d0 0xc000a980e8] [0xc000a980b8 0xc000a980d0 0xc000a980e8] [0xc000a980c8 0xc000a980e0] [0x935700 0x935700] 0xc0023bcba0 }: Command stdout: stderr:
error: unable to upgrade connection: container not found ("nginx") error: exit status 1 May 6 18:33:40.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:33:40.675: INFO: rc: 1 May 6 18:33:40.675: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001b8dfb0 exit status 1 true [0xc0003dc940 0xc0003dc990 0xc0003dc9c8] [0xc0003dc940 0xc0003dc990 0xc0003dc9c8] [0xc0003dc970 0xc0003dc9b0] [0x935700 0x935700] 0xc002649e60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:33:50.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:33:50.862: INFO: rc: 1 May 6 18:33:50.862: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00132d6e0 exit status 1 true [0xc001a162f8 0xc001a16310 0xc001a16328] [0xc001a162f8 0xc001a16310 0xc001a16328] [0xc001a16308 0xc001a16320] [0x935700 0x935700] 0xc00215bda0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:34:00.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:34:00.952: INFO: rc: 
1 May 6 18:34:00.952: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00132d800 exit status 1 true [0xc001a16330 0xc001a16348 0xc001a16360] [0xc001a16330 0xc001a16348 0xc001a16360] [0xc001a16340 0xc001a16358] [0x935700 0x935700] 0xc0021d6060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:34:10.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:34:11.044: INFO: rc: 1 May 6 18:34:11.045: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00132d9b0 exit status 1 true [0xc001a16368 0xc001a16380 0xc001a16398] [0xc001a16368 0xc001a16380 0xc001a16398] [0xc001a16378 0xc001a16390] [0x935700 0x935700] 0xc0021d6300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:34:21.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:34:21.145: INFO: rc: 1 May 6 18:34:21.146: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 
0xc00132dad0 exit status 1 true [0xc001a163a0 0xc001a163b8 0xc001a163d0] [0xc001a163a0 0xc001a163b8 0xc001a163d0] [0xc001a163b0 0xc001a163c8] [0x935700 0x935700] 0xc0021d6600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:34:31.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:34:31.246: INFO: rc: 1 May 6 18:34:31.246: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00230c270 exit status 1 true [0xc00029cc18 0xc00029cca0 0xc00029ccd8] [0xc00029cc18 0xc00029cca0 0xc00029ccd8] [0xc00029cc68 0xc00029ccc8] [0x935700 0x935700] 0xc0010dc7e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:34:41.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:34:41.337: INFO: rc: 1 May 6 18:34:41.337: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0003f6ba0 exit status 1 true [0xc00016e000 0xc00000e298 0xc00039c198] [0xc00016e000 0xc00000e298 0xc00039c198] [0xc00000e238 0xc00039c100] [0x935700 0x935700] 0xc0024024e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:34:51.337: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:34:51.446: INFO: rc: 1 May 6 18:34:51.446: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001686240 exit status 1 true [0xc0003dc070 0xc0003dc0f0 0xc0003dc1b0] [0xc0003dc070 0xc0003dc0f0 0xc0003dc1b0] [0xc0003dc0e0 0xc0003dc1a0] [0x935700 0x935700] 0xc00215a1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:35:01.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:35:01.545: INFO: rc: 1 May 6 18:35:01.545: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0003f6d20 exit status 1 true [0xc00039c228 0xc00039c348 0xc00039c5a8] [0xc00039c228 0xc00039c348 0xc00039c5a8] [0xc00039c338 0xc00039c4c8] [0x935700 0x935700] 0xc002402cc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:35:11.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:35:11.637: INFO: rc: 1 May 6 18:35:11.637: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016863c0 exit status 1 true [0xc0003dc1c8 0xc0003dc2d8 0xc0003dc350] [0xc0003dc1c8 0xc0003dc2d8 0xc0003dc350] [0xc0003dc2a8 0xc0003dc348] [0x935700 0x935700] 0xc00215a5a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:35:21.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:35:21.740: INFO: rc: 1 May 6 18:35:21.740: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0002bd980 exit status 1 true [0xc001a16000 0xc001a16018 0xc001a16030] [0xc001a16000 0xc001a16018 0xc001a16030] [0xc001a16010 0xc001a16028] [0x935700 0x935700] 0xc001cc81e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:35:31.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:35:31.834: INFO: rc: 1 May 6 18:35:31.834: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016865a0 exit status 1 true [0xc0003dc3d0 0xc0003dc438 0xc0003dc4c0] [0xc0003dc3d0 0xc0003dc438 0xc0003dc4c0] 
[0xc0003dc420 0xc0003dc498] [0x935700 0x935700] 0xc00215a8a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:35:41.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:35:41.921: INFO: rc: 1 May 6 18:35:41.921: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00230c420 exit status 1 true [0xc00029cce8 0xc00029cd78 0xc00029cdd8] [0xc00029cce8 0xc00029cd78 0xc00029cdd8] [0xc00029cd48 0xc00029cd90] [0x935700 0x935700] 0xc0010dca80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:35:51.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:35:52.008: INFO: rc: 1 May 6 18:35:52.008: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0002bdaa0 exit status 1 true [0xc001a16038 0xc001a16050 0xc001a16068] [0xc001a16038 0xc001a16050 0xc001a16068] [0xc001a16048 0xc001a16060] [0x935700 0x935700] 0xc001cc8480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:36:02.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true' May 6 18:36:02.109: INFO: rc: 1 May 6 18:36:02.109: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001686750 exit status 1 true [0xc0003dc4d0 0xc0003dc530 0xc0003dc598] [0xc0003dc4d0 0xc0003dc530 0xc0003dc598] [0xc0003dc4e8 0xc0003dc590] [0x935700 0x935700] 0xc00215bc80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:36:12.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:36:12.210: INFO: rc: 1 May 6 18:36:12.210: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0002bdbf0 exit status 1 true [0xc001a16070 0xc001a16088 0xc001a160a0] [0xc001a16070 0xc001a16088 0xc001a160a0] [0xc001a16080 0xc001a16098] [0x935700 0x935700] 0xc001cc8720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:36:22.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:36:22.300: INFO: rc: 1 May 6 18:36:22.300: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016868a0 exit status 1 true [0xc0003dc5a0 0xc0003dc670 0xc0003dc740] [0xc0003dc5a0 0xc0003dc670 0xc0003dc740] [0xc0003dc648 0xc0003dc738] [0x935700 0x935700] 0xc00215bf20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:36:32.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:36:32.395: INFO: rc: 1 May 6 18:36:32.395: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0003f6bd0 exit status 1 true [0xc00000e238 0xc00039c0b0 0xc00039c228] [0xc00000e238 0xc00039c0b0 0xc00039c228] [0xc00016e000 0xc00039c198] [0x935700 0x935700] 0xc00215a1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:36:42.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:36:42.484: INFO: rc: 1 May 6 18:36:42.484: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001686270 exit status 1 true [0xc0003dc070 0xc0003dc0f0 0xc0003dc1b0] [0xc0003dc070 0xc0003dc0f0 0xc0003dc1b0] [0xc0003dc0e0 0xc0003dc1a0] [0x935700 0x935700] 0xc0024024e0 }: Command stdout: stderr: Error from server (NotFound): 
pods "ss-0" not found error: exit status 1 May 6 18:36:52.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:36:52.575: INFO: rc: 1 May 6 18:36:52.575: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00230c2a0 exit status 1 true [0xc00029cbc0 0xc00029cc68 0xc00029ccc8] [0xc00029cbc0 0xc00029cc68 0xc00029ccc8] [0xc00029cc50 0xc00029ccc0] [0x935700 0x935700] 0xc0010dc7e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:37:02.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:37:02.661: INFO: rc: 1 May 6 18:37:02.662: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00230c450 exit status 1 true [0xc00029ccd8 0xc00029cd48 0xc00029cd90] [0xc00029ccd8 0xc00029cd48 0xc00029cd90] [0xc00029cd18 0xc00029cd80] [0x935700 0x935700] 0xc0010dca80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 18:37:12.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 18:37:12.753: INFO: rc: 1 May 6 18:37:12.754: INFO: Waiting 10s to 
retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0003f6ed0 exit status 1 true [0xc00039c260 0xc00039c420 0xc00039c5c8] [0xc00039c260 0xc00039c420 0xc00039c5c8] [0xc00039c348 0xc00039c5a8] [0x935700 0x935700] 0xc00215a5a0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
May 6 18:37:22.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 6 18:37:22.843: INFO: rc: 1
May 6 18:37:22.843: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0003f7110 exit status 1 true [0xc00039c6a0 0xc00039c7c0 0xc00039c928] [0xc00039c6a0 0xc00039c7c0 0xc00039c928] [0xc00039c7b8 0xc00039c870] [0x935700 0x935700] 0xc00215a8a0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
May 6 18:37:32.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 6 18:37:32.946: INFO: rc: 1
May 6 18:37:32.946: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0016863f0 exit status 1 true [0xc0003dc1c8 0xc0003dc2d8 0xc0003dc350] [0xc0003dc1c8 0xc0003dc2d8 0xc0003dc350] [0xc0003dc2a8 0xc0003dc348] [0x935700 0x935700] 0xc002402cc0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
May 6 18:37:42.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 6 18:37:43.038: INFO: rc: 1
May 6 18:37:43.038: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001686630 exit status 1 true [0xc0003dc3d0 0xc0003dc438 0xc0003dc4c0] [0xc0003dc3d0 0xc0003dc438 0xc0003dc4c0] [0xc0003dc420 0xc0003dc498] [0x935700 0x935700] 0xc0024039e0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
May 6 18:37:53.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 6 18:37:53.118: INFO: rc: 1
May 6 18:37:53.118: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0002bda40 exit status 1 true [0xc001a16000 0xc001a16018 0xc001a16030] [0xc001a16000 0xc001a16018 0xc001a16030] [0xc001a16010 0xc001a16028] [0x935700 0x935700] 0xc001cc81e0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
May 6 18:38:03.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 6 18:38:03.204: INFO: rc: 1
May 6 18:38:03.204: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00230c570 exit status 1 true [0xc00029cdd8 0xc00029ce18 0xc00029ce90] [0xc00029cdd8 0xc00029ce18 0xc00029ce90] [0xc00029cdf8 0xc00029ce70] [0x935700 0x935700] 0xc0010dcd20 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
May 6 18:38:13.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 6 18:38:13.297: INFO: rc: 1
May 6 18:38:13.297: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0002bdbc0 exit status 1 true [0xc001a16038 0xc001a16050 0xc001a16068] [0xc001a16038 0xc001a16050 0xc001a16068] [0xc001a16048 0xc001a16060] [0x935700 0x935700] 0xc001cc8480 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
May 6 18:38:23.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 6 18:38:23.386: INFO: rc: 1
May 6 18:38:23.386: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00230c6c0 exit status 1 true [0xc00029ced0 0xc00029cf78 0xc00029cfc8] [0xc00029ced0 0xc00029cf78 0xc00029cfc8] [0xc00029cf50 0xc00029cfb8] [0x935700 0x935700] 0xc0010dd0e0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
May 6 18:38:33.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 6 18:38:33.471: INFO: rc: 1
May 6 18:38:33.471: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
May 6 18:38:33.471: INFO: Scaling statefulset ss to 0
May 6 18:38:33.480: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 6 18:38:33.483: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nwrpx
May 6 18:38:33.485: INFO: Scaling statefulset ss to 0
May 6 18:38:33.491: INFO: Waiting for statefulset status.replicas updated to 0
May 6 18:38:33.493: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:38:33.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-nwrpx" for this suite.
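The loop above is the framework's RunHostCmd retry: the same `kubectl exec` is reissued every 10s while pod `ss-0` does not exist, each failure logged as `rc: 1`. A stand-alone sketch of that pattern, in shell rather than the suite's Go; the function name and the attempt limit are illustrative, only the 10s delay and the wrapped command come from the log:

```shell
# Retry a command with a fixed delay until it succeeds, mirroring the
# RunHostCmd retry loop in the log above. max_tries is an assumed cap;
# the suite's own overall timeout is not shown in this excerpt.
retry_host_cmd() {
  max_tries=$1; delay=$2; shift 2
  try=1
  while true; do
    "$@" && return 0            # success: stop retrying
    rc=$?
    echo "rc: $rc"              # matches the "INFO: rc: 1" lines above
    [ "$try" -ge "$max_tries" ] && return "$rc"
    try=$((try + 1))
    sleep "$delay"
  done
}

# Against the cluster this would wrap the exact command from the log:
#   retry_host_cmd 18 10 kubectl --kubeconfig=/root/.kube/config \
#     exec --namespace=e2e-tests-statefulset-nwrpx ss-0 -- \
#     /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
```

Because the pod is already gone when the StatefulSet scales down, every attempt here fails until the suite gives up and moves on to teardown, which is exactly the behavior recorded above.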
May 6 18:38:40.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:38:40.291: INFO: namespace: e2e-tests-statefulset-nwrpx, resource: bindings, ignored listing per whitelist
May 6 18:38:40.327: INFO: namespace e2e-tests-statefulset-nwrpx deletion completed in 6.805912308s
• [SLOW TEST:373.733 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:38:40.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-wzfkw
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wzfkw to expose endpoints map[]
May 6 18:38:40.503: INFO: Get endpoints failed (13.598816ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May 6 18:38:41.507: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wzfkw exposes endpoints map[] (1.017400258s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-wzfkw
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wzfkw to expose endpoints map[pod1:[80]]
May 6 18:38:44.564: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wzfkw exposes endpoints map[pod1:[80]] (3.049258386s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-wzfkw
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wzfkw to expose endpoints map[pod1:[80] pod2:[80]]
May 6 18:38:48.667: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wzfkw exposes endpoints map[pod1:[80] pod2:[80]] (4.098406761s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-wzfkw
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wzfkw to expose endpoints map[pod2:[80]]
May 6 18:38:49.725: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wzfkw exposes endpoints map[pod2:[80]] (1.053651668s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-wzfkw
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-wzfkw to expose endpoints map[]
May 6 18:38:50.754: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-wzfkw exposes endpoints map[] (1.02532068s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:38:50.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-wzfkw" for this suite.
May 6 18:39:12.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:39:12.898: INFO: namespace: e2e-tests-services-wzfkw, resource: bindings, ignored listing per whitelist
May 6 18:39:12.974: INFO: namespace e2e-tests-services-wzfkw deletion completed in 22.105864994s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:32.647 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:39:12.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-e1b2c5fd-8fc8-11ea-a618-0242ac110019
STEP: Creating a pod to test consume secrets
May 6 18:39:13.116: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e1b37b45-8fc8-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-z5tq4" to be "success or failure"
May 6 18:39:13.138: INFO: Pod "pod-projected-secrets-e1b37b45-8fc8-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 21.855785ms
May 6 18:39:15.274: INFO: Pod "pod-projected-secrets-e1b37b45-8fc8-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157856837s
May 6 18:39:17.278: INFO: Pod "pod-projected-secrets-e1b37b45-8fc8-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.161915887s
STEP: Saw pod success
May 6 18:39:17.278: INFO: Pod "pod-projected-secrets-e1b37b45-8fc8-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:39:17.280: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-e1b37b45-8fc8-11ea-a618-0242ac110019 container projected-secret-volume-test:
STEP: delete the pod
May 6 18:39:17.320: INFO: Waiting for pod pod-projected-secrets-e1b37b45-8fc8-11ea-a618-0242ac110019 to disappear
May 6 18:39:17.348: INFO: Pod pod-projected-secrets-e1b37b45-8fc8-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:39:17.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-z5tq4" for this suite.
May 6 18:39:23.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:39:23.483: INFO: namespace: e2e-tests-projected-z5tq4, resource: bindings, ignored listing per whitelist
May 6 18:39:23.497: INFO: namespace e2e-tests-projected-z5tq4 deletion completed in 6.145055212s
• [SLOW TEST:10.523 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:39:23.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-e7f5de03-8fc8-11ea-a618-0242ac110019
STEP: Creating a pod to test consume configMaps
May 6 18:39:23.632: INFO: Waiting up to 5m0s for pod "pod-configmaps-e7f7e773-8fc8-11ea-a618-0242ac110019" in namespace "e2e-tests-configmap-fkwdn" to be "success or failure"
May 6 18:39:23.680: INFO: Pod "pod-configmaps-e7f7e773-8fc8-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 47.998932ms
May 6 18:39:25.711: INFO: Pod "pod-configmaps-e7f7e773-8fc8-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078197449s
May 6 18:39:27.714: INFO: Pod "pod-configmaps-e7f7e773-8fc8-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0816732s
STEP: Saw pod success
May 6 18:39:27.714: INFO: Pod "pod-configmaps-e7f7e773-8fc8-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:39:27.717: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-e7f7e773-8fc8-11ea-a618-0242ac110019 container configmap-volume-test:
STEP: delete the pod
May 6 18:39:27.732: INFO: Waiting for pod pod-configmaps-e7f7e773-8fc8-11ea-a618-0242ac110019 to disappear
May 6 18:39:27.737: INFO: Pod pod-configmaps-e7f7e773-8fc8-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:39:27.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-fkwdn" for this suite.
May 6 18:39:33.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:39:33.820: INFO: namespace: e2e-tests-configmap-fkwdn, resource: bindings, ignored listing per whitelist
May 6 18:39:33.830: INFO: namespace e2e-tests-configmap-fkwdn deletion completed in 6.090063646s
• [SLOW TEST:10.333 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:39:33.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-ee2a5fcf-8fc8-11ea-a618-0242ac110019
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-ee2a5fcf-8fc8-11ea-a618-0242ac110019
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:41:04.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c6zqr" for this suite.
May 6 18:41:26.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:41:26.537: INFO: namespace: e2e-tests-projected-c6zqr, resource: bindings, ignored listing per whitelist
May 6 18:41:26.590: INFO: namespace e2e-tests-projected-c6zqr deletion completed in 22.089929278s
• [SLOW TEST:112.760 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:41:26.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
May 6 18:41:26.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-67c2s run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1
--restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
May 6 18:41:32.185: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0506 18:41:32.119387 2816 log.go:172] (0xc000138790) (0xc000742140) Create stream\nI0506 18:41:32.119423 2816 log.go:172] (0xc000138790) (0xc000742140) Stream added, broadcasting: 1\nI0506 18:41:32.122167 2816 log.go:172] (0xc000138790) Reply frame received for 1\nI0506 18:41:32.122209 2816 log.go:172] (0xc000138790) (0xc0005fcdc0) Create stream\nI0506 18:41:32.122222 2816 log.go:172] (0xc000138790) (0xc0005fcdc0) Stream added, broadcasting: 3\nI0506 18:41:32.123074 2816 log.go:172] (0xc000138790) Reply frame received for 3\nI0506 18:41:32.123140 2816 log.go:172] (0xc000138790) (0xc0007028c0) Create stream\nI0506 18:41:32.123159 2816 log.go:172] (0xc000138790) (0xc0007028c0) Stream added, broadcasting: 5\nI0506 18:41:32.124016 2816 log.go:172] (0xc000138790) Reply frame received for 5\nI0506 18:41:32.124053 2816 log.go:172] (0xc000138790) (0xc0007421e0) Create stream\nI0506 18:41:32.124063 2816 log.go:172] (0xc000138790) (0xc0007421e0) Stream added, broadcasting: 7\nI0506 18:41:32.124892 2816 log.go:172] (0xc000138790) Reply frame received for 7\nI0506 18:41:32.125089 2816 log.go:172] (0xc0005fcdc0) (3) Writing data frame\nI0506 18:41:32.125340 2816 log.go:172] (0xc0005fcdc0) (3) Writing data frame\nI0506 18:41:32.126401 2816 log.go:172] (0xc000138790) Data frame received for 5\nI0506 18:41:32.126420 2816 log.go:172] (0xc0007028c0) (5) Data frame handling\nI0506 18:41:32.126435 2816 log.go:172] (0xc0007028c0) (5) Data frame sent\nI0506 18:41:32.126994 2816 log.go:172] (0xc000138790) Data frame received for 5\nI0506 18:41:32.127013 2816 log.go:172] (0xc0007028c0) (5) Data frame handling\nI0506 18:41:32.127028 2816 log.go:172] (0xc0007028c0) (5) Data frame sent\nI0506 18:41:32.162490 2816 log.go:172] (0xc000138790) Data frame received for 7\nI0506 18:41:32.162536 2816 log.go:172] (0xc0007421e0) (7) Data frame handling\nI0506 18:41:32.162581 2816 log.go:172] (0xc000138790) Data frame received for 5\nI0506 18:41:32.162625 2816 log.go:172] (0xc0007028c0) (5) Data frame handling\nI0506 18:41:32.162915 2816 log.go:172] (0xc000138790) Data frame received for 1\nI0506 18:41:32.162936 2816 log.go:172] (0xc000742140) (1) Data frame handling\nI0506 18:41:32.162960 2816 log.go:172] (0xc000742140) (1) Data frame sent\nI0506 18:41:32.163094 2816 log.go:172] (0xc000138790) (0xc000742140) Stream removed, broadcasting: 1\nI0506 18:41:32.163173 2816 log.go:172] (0xc000138790) (0xc000742140) Stream removed, broadcasting: 1\nI0506 18:41:32.163204 2816 log.go:172] (0xc000138790) (0xc0005fcdc0) Stream removed, broadcasting: 3\nI0506 18:41:32.163238 2816 log.go:172] (0xc000138790) (0xc0007028c0) Stream removed, broadcasting: 5\nI0506 18:41:32.163305 2816 log.go:172] (0xc000138790) (0xc0005fcdc0) Stream removed, broadcasting: 3\nI0506 18:41:32.163364 2816 log.go:172] (0xc000138790) Go away received\nI0506 18:41:32.163509 2816 log.go:172] (0xc000138790) (0xc0007421e0) Stream removed, broadcasting: 7\n"
May 6 18:41:32.185: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:41:34.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-67c2s" for this suite.
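The attach/stdin exchange above reduces to the job's container command, `sh -c 'cat && echo stdin closed'`: everything the test writes on stdin is echoed back by `cat`, and once stdin closes the marker line is printed. The same pipeline can be run locally to reproduce the stdout the test asserts on; the `abcd1234` input value is taken from the log's stdout line:

```shell
# Reproduce the container side of `kubectl run --rm ... --attach --stdin`:
# cat copies stdin to stdout (no trailing newline on the input), then the
# marker is printed after stdin closes, giving "abcd1234stdin closed".
out=$(printf 'abcd1234' | sh -c "cat && echo 'stdin closed'")
echo "$out"
```

The remaining `job.batch "e2e-test-rm-busybox-job" deleted` line in the stdout comes from `--rm=true`, which deletes the job after the attached session ends; the test then verifies the job is gone.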
May 6 18:41:40.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:41:40.237: INFO: namespace: e2e-tests-kubectl-67c2s, resource: bindings, ignored listing per whitelist
May 6 18:41:40.342: INFO: namespace e2e-tests-kubectl-67c2s deletion completed in 6.149830926s
• [SLOW TEST:13.752 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:41:40.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 6 18:42:06.472: INFO: Container started at 2020-05-06 18:41:43 +0000 UTC, pod became ready at 2020-05-06 18:42:06 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:42:06.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-xv6bm" for this suite.
May 6 18:42:28.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:42:28.560: INFO: namespace: e2e-tests-container-probe-xv6bm, resource: bindings, ignored listing per whitelist
May 6 18:42:28.605: INFO: namespace e2e-tests-container-probe-xv6bm deletion completed in 22.129010961s
• [SLOW TEST:48.263 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:42:28.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
May 6 18:42:28.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zlrvm'
May 6 18:42:29.317: INFO: stderr: ""
May 6 18:42:29.317: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 6 18:42:29.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zlrvm'
May 6 18:42:29.481: INFO: stderr: ""
May 6 18:42:29.481: INFO: stdout: "update-demo-nautilus-trtqk update-demo-nautilus-wn6q6 "
May 6 18:42:29.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-trtqk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zlrvm'
May 6 18:42:29.580: INFO: stderr: ""
May 6 18:42:29.580: INFO: stdout: ""
May 6 18:42:29.580: INFO: update-demo-nautilus-trtqk is created but not running
May 6 18:42:34.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zlrvm'
May 6 18:42:34.766: INFO: stderr: ""
May 6 18:42:34.766: INFO: stdout: "update-demo-nautilus-trtqk update-demo-nautilus-wn6q6 "
May 6 18:42:34.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-trtqk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zlrvm'
May 6 18:42:34.876: INFO: stderr: ""
May 6 18:42:34.876: INFO: stdout: "true"
May 6 18:42:34.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-trtqk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zlrvm'
May 6 18:42:34.973: INFO: stderr: ""
May 6 18:42:34.973: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 6 18:42:34.973: INFO: validating pod update-demo-nautilus-trtqk
May 6 18:42:34.977: INFO: got data: { "image": "nautilus.jpg" }
May 6 18:42:34.977: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 6 18:42:34.977: INFO: update-demo-nautilus-trtqk is verified up and running
May 6 18:42:34.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wn6q6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zlrvm'
May 6 18:42:35.063: INFO: stderr: ""
May 6 18:42:35.064: INFO: stdout: "true"
May 6 18:42:35.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wn6q6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zlrvm'
May 6 18:42:35.155: INFO: stderr: ""
May 6 18:42:35.155: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 6 18:42:35.155: INFO: validating pod update-demo-nautilus-wn6q6
May 6 18:42:35.158: INFO: got data: { "image": "nautilus.jpg" }
May 6 18:42:35.158: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 6 18:42:35.158: INFO: update-demo-nautilus-wn6q6 is verified up and running
STEP: using delete to clean up resources
May 6 18:42:35.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zlrvm'
May 6 18:42:35.350: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 6 18:42:35.350: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 6 18:42:35.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-zlrvm'
May 6 18:42:35.819: INFO: stderr: "No resources found.\n"
May 6 18:42:35.819: INFO: stdout: ""
May 6 18:42:35.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-zlrvm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 6 18:42:36.277: INFO: stderr: ""
May 6 18:42:36.277: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:42:36.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zlrvm" for this suite.
May 6 18:42:44.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:42:44.858: INFO: namespace: e2e-tests-kubectl-zlrvm, resource: bindings, ignored listing per whitelist
May 6 18:42:44.894: INFO: namespace e2e-tests-kubectl-zlrvm deletion completed in 8.27331435s
• [SLOW TEST:16.288 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:42:44.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0506 18:43:16.060991 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 6 18:43:16.061: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:43:16.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ql7p5" for this suite.
May 6 18:43:22.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:43:22.184: INFO: namespace: e2e-tests-gc-ql7p5, resource: bindings, ignored listing per whitelist
May 6 18:43:22.428: INFO: namespace e2e-tests-gc-ql7p5 deletion completed in 6.363104766s
• [SLOW TEST:37.534 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:43:22.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 6 18:43:27.219: INFO: Successfully updated pod "pod-update-766693ca-8fc9-11ea-a618-0242ac110019"
STEP: verifying the updated pod is in kubernetes
May 6 18:43:27.224: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:43:27.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-n2q6f" for this suite.
May 6 18:43:49.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:43:49.285: INFO: namespace: e2e-tests-pods-n2q6f, resource: bindings, ignored listing per whitelist
May 6 18:43:49.345: INFO: namespace e2e-tests-pods-n2q6f deletion completed in 22.118927634s
• [SLOW TEST:26.918 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:43:49.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
May 6 18:43:54.291: INFO: Successfully updated pod "annotationupdate866d3450-8fc9-11ea-a618-0242ac110019"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:43:56.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4xr6n" for this suite.
May 6 18:44:20.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:44:20.523: INFO: namespace: e2e-tests-projected-4xr6n, resource: bindings, ignored listing per whitelist
May 6 18:44:20.537: INFO: namespace e2e-tests-projected-4xr6n deletion completed in 24.225465104s
• [SLOW TEST:31.191 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:44:20.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 6 18:44:20.646: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98ff3708-8fc9-11ea-a618-0242ac110019" in namespace "e2e-tests-downward-api-plmvd" to be "success or failure"
May 6 18:44:20.656: INFO: Pod "downwardapi-volume-98ff3708-8fc9-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 10.253308ms
May 6 18:44:22.661: INFO: Pod "downwardapi-volume-98ff3708-8fc9-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0147296s
May 6 18:44:24.854: INFO: Pod "downwardapi-volume-98ff3708-8fc9-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.208406797s
STEP: Saw pod success
May 6 18:44:24.854: INFO: Pod "downwardapi-volume-98ff3708-8fc9-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:44:24.858: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-98ff3708-8fc9-11ea-a618-0242ac110019 container client-container:
STEP: delete the pod
May 6 18:44:25.299: INFO: Waiting for pod downwardapi-volume-98ff3708-8fc9-11ea-a618-0242ac110019 to disappear
May 6 18:44:25.387: INFO: Pod downwardapi-volume-98ff3708-8fc9-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:44:25.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-plmvd" for this suite.
May 6 18:44:31.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:44:31.852: INFO: namespace: e2e-tests-downward-api-plmvd, resource: bindings, ignored listing per whitelist
May 6 18:44:31.855: INFO: namespace e2e-tests-downward-api-plmvd deletion completed in 6.464989706s
• [SLOW TEST:11.318 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:44:31.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:44:36.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-r4rgm" for this suite.
May 6 18:44:42.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:44:42.652: INFO: namespace: e2e-tests-emptydir-wrapper-r4rgm, resource: bindings, ignored listing per whitelist
May 6 18:44:42.673: INFO: namespace e2e-tests-emptydir-wrapper-r4rgm deletion completed in 6.13462709s
• [SLOW TEST:10.817 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:44:42.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 6 18:44:42.977: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a63b5c1b-8fc9-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-2wsbd" to be "success or failure"
May 6 18:44:42.992: INFO: Pod "downwardapi-volume-a63b5c1b-8fc9-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 15.220049ms
May 6 18:44:45.442: INFO: Pod "downwardapi-volume-a63b5c1b-8fc9-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.465037532s
May 6 18:44:47.445: INFO: Pod "downwardapi-volume-a63b5c1b-8fc9-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468139996s
May 6 18:44:49.449: INFO: Pod "downwardapi-volume-a63b5c1b-8fc9-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.472193825s
STEP: Saw pod success
May 6 18:44:49.449: INFO: Pod "downwardapi-volume-a63b5c1b-8fc9-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:44:49.452: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-a63b5c1b-8fc9-11ea-a618-0242ac110019 container client-container:
STEP: delete the pod
May 6 18:44:49.653: INFO: Waiting for pod downwardapi-volume-a63b5c1b-8fc9-11ea-a618-0242ac110019 to disappear
May 6 18:44:49.669: INFO: Pod downwardapi-volume-a63b5c1b-8fc9-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:44:49.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2wsbd" for this suite.
May 6 18:44:57.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:44:57.772: INFO: namespace: e2e-tests-projected-2wsbd, resource: bindings, ignored listing per whitelist
May 6 18:44:57.803: INFO: namespace e2e-tests-projected-2wsbd deletion completed in 8.130876313s
• [SLOW TEST:15.130 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:44:57.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:45:02.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-sh6ps" for this suite.
May 6 18:45:54.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:45:54.125: INFO: namespace: e2e-tests-kubelet-test-sh6ps, resource: bindings, ignored listing per whitelist
May 6 18:45:54.140: INFO: namespace e2e-tests-kubelet-test-sh6ps deletion completed in 52.128932418s
• [SLOW TEST:56.337 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:45:54.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-d0dc970d-8fc9-11ea-a618-0242ac110019
STEP: Creating secret with name s-test-opt-upd-d0dc9766-8fc9-11ea-a618-0242ac110019
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-d0dc970d-8fc9-11ea-a618-0242ac110019
STEP: Updating secret s-test-opt-upd-d0dc9766-8fc9-11ea-a618-0242ac110019
STEP: Creating secret with name s-test-opt-create-d0dc9780-8fc9-11ea-a618-0242ac110019
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:47:27.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-62nv8" for this suite.
May 6 18:47:51.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:47:51.299: INFO: namespace: e2e-tests-projected-62nv8, resource: bindings, ignored listing per whitelist
May 6 18:47:51.318: INFO: namespace e2e-tests-projected-62nv8 deletion completed in 24.180849189s
• [SLOW TEST:117.177 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:47:51.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-16a0a8f6-8fca-11ea-a618-0242ac110019
STEP: Creating a pod to test consume configMaps
May 6 18:47:51.495: INFO: Waiting up to 5m0s for pod "pod-configmaps-16a3b8f9-8fca-11ea-a618-0242ac110019" in namespace "e2e-tests-configmap-dp9mx" to be "success or failure"
May 6 18:47:51.505: INFO: Pod "pod-configmaps-16a3b8f9-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 10.352933ms
May 6 18:47:53.510: INFO: Pod "pod-configmaps-16a3b8f9-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015420098s
May 6 18:47:55.538: INFO: Pod "pod-configmaps-16a3b8f9-8fca-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 4.042765842s
May 6 18:47:57.542: INFO: Pod "pod-configmaps-16a3b8f9-8fca-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047147258s
STEP: Saw pod success
May 6 18:47:57.542: INFO: Pod "pod-configmaps-16a3b8f9-8fca-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:47:57.545: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-16a3b8f9-8fca-11ea-a618-0242ac110019 container configmap-volume-test:
STEP: delete the pod
May 6 18:47:57.602: INFO: Waiting for pod pod-configmaps-16a3b8f9-8fca-11ea-a618-0242ac110019 to disappear
May 6 18:47:57.631: INFO: Pod pod-configmaps-16a3b8f9-8fca-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:47:57.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-dp9mx" for this suite.
May 6 18:48:03.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:48:03.762: INFO: namespace: e2e-tests-configmap-dp9mx, resource: bindings, ignored listing per whitelist
May 6 18:48:03.784: INFO: namespace e2e-tests-configmap-dp9mx deletion completed in 6.149402931s
• [SLOW TEST:12.466 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:48:03.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-qdqtk/secret-test-1e151c74-8fca-11ea-a618-0242ac110019
STEP: Creating a pod to test consume secrets
May 6 18:48:03.929: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e171425-8fca-11ea-a618-0242ac110019" in namespace "e2e-tests-secrets-qdqtk" to be "success or failure"
May 6 18:48:03.933: INFO: Pod "pod-configmaps-1e171425-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258121ms
May 6 18:48:06.085: INFO: Pod "pod-configmaps-1e171425-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156559956s
May 6 18:48:08.090: INFO: Pod "pod-configmaps-1e171425-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161318212s
May 6 18:48:10.094: INFO: Pod "pod-configmaps-1e171425-8fca-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.165017879s
STEP: Saw pod success
May 6 18:48:10.094: INFO: Pod "pod-configmaps-1e171425-8fca-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 18:48:10.096: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-1e171425-8fca-11ea-a618-0242ac110019 container env-test:
STEP: delete the pod
May 6 18:48:10.303: INFO: Waiting for pod pod-configmaps-1e171425-8fca-11ea-a618-0242ac110019 to disappear
May 6 18:48:10.340: INFO: Pod pod-configmaps-1e171425-8fca-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:48:10.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-qdqtk" for this suite.
May 6 18:48:16.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:48:16.423: INFO: namespace: e2e-tests-secrets-qdqtk, resource: bindings, ignored listing per whitelist
May 6 18:48:16.434: INFO: namespace e2e-tests-secrets-qdqtk deletion completed in 6.090667643s
• [SLOW TEST:12.650 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:48:16.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 6 18:48:24.869: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 6 18:48:24.895: INFO: Pod pod-with-poststart-http-hook still exists
May 6 18:48:26.896: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 6 18:48:26.899: INFO: Pod pod-with-poststart-http-hook still exists
May 6 18:48:28.896: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 6 18:48:28.901: INFO: Pod pod-with-poststart-http-hook still exists
May 6 18:48:30.896: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 6 18:48:30.900: INFO: Pod pod-with-poststart-http-hook still exists
May 6 18:48:32.896: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 6 18:48:32.899: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:48:32.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gm52v" for this suite.
May 6 18:48:58.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:48:58.956: INFO: namespace: e2e-tests-container-lifecycle-hook-gm52v, resource: bindings, ignored listing per whitelist May 6 18:48:58.992: INFO: namespace e2e-tests-container-lifecycle-hook-gm52v deletion completed in 26.089197581s • [SLOW TEST:42.557 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:48:58.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 18:48:59.089: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 6 18:48:59.134: INFO: Pod name sample-pod: Found 0 pods out of 1 May 6 18:49:04.138: INFO: Pod name sample-pod: Found 
1 pods out of 1 STEP: ensuring each pod is running May 6 18:49:04.138: INFO: Creating deployment "test-rolling-update-deployment" May 6 18:49:04.143: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 6 18:49:04.400: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 6 18:49:06.476: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 6 18:49:06.479: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387745, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387745, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387745, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387745, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 18:49:08.488: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387745, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387745, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387745, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724387745, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 18:49:10.483: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 6 18:49:10.492: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-75htd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-75htd/deployments/test-rolling-update-deployment,UID:41fbb83f-8fca-11ea-99e8-0242ac110002,ResourceVersion:9100132,Generation:1,CreationTimestamp:2020-05-06 18:49:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-06 18:49:05 +0000 UTC 2020-05-06 18:49:05 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-06 18:49:09 +0000 UTC 2020-05-06 18:49:05 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 6 18:49:10.495: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment 
"test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-75htd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-75htd/replicasets/test-rolling-update-deployment-75db98fb4c,UID:42249acf-8fca-11ea-99e8-0242ac110002,ResourceVersion:9100123,Generation:1,CreationTimestamp:2020-05-06 18:49:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 41fbb83f-8fca-11ea-99e8-0242ac110002 0xc001904bf7 0xc001904bf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 6 18:49:10.495: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 6 18:49:10.495: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-75htd,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-75htd/replicasets/test-rolling-update-controller,UID:3ef93abb-8fca-11ea-99e8-0242ac110002,ResourceVersion:9100131,Generation:2,CreationTimestamp:2020-05-06 18:48:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 41fbb83f-8fca-11ea-99e8-0242ac110002 0xc001904b17 0xc001904b18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 6 18:49:10.498: INFO: Pod "test-rolling-update-deployment-75db98fb4c-sc299" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-sc299,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-75htd,SelfLink:/api/v1/namespaces/e2e-tests-deployment-75htd/pods/test-rolling-update-deployment-75db98fb4c-sc299,UID:42c23444-8fca-11ea-99e8-0242ac110002,ResourceVersion:9100122,Generation:0,CreationTimestamp:2020-05-06 18:49:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 42249acf-8fca-11ea-99e8-0242ac110002 0xc0025add77 0xc0025add78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-v7hjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-v7hjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-v7hjg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025ade00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025ade20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:49:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:49:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:49:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 18:49:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.105,StartTime:2020-05-06 18:49:05 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-06 18:49:08 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://d1953a3b9efcdb9ccb93e773598b4fbce0fa42e50a295952a7d22dfc3d445887}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:49:10.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-75htd" 
for this suite. May 6 18:49:20.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:49:20.634: INFO: namespace: e2e-tests-deployment-75htd, resource: bindings, ignored listing per whitelist May 6 18:49:20.694: INFO: namespace e2e-tests-deployment-75htd deletion completed in 10.193176322s • [SLOW TEST:21.701 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:49:20.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0506 18:49:31.278937 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 6 18:49:31.278: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:49:31.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-ftmv7" for this suite. 
May 6 18:49:37.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:49:37.349: INFO: namespace: e2e-tests-gc-ftmv7, resource: bindings, ignored listing per whitelist May 6 18:49:37.399: INFO: namespace e2e-tests-gc-ftmv7 deletion completed in 6.11748004s • [SLOW TEST:16.705 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:49:37.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 6 18:49:37.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-sqctp' May 6 18:49:37.699: INFO: stderr: 
"kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 6 18:49:37.700: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 6 18:49:37.744: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-9f6dq] May 6 18:49:37.745: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-9f6dq" in namespace "e2e-tests-kubectl-sqctp" to be "running and ready" May 6 18:49:37.811: INFO: Pod "e2e-test-nginx-rc-9f6dq": Phase="Pending", Reason="", readiness=false. Elapsed: 66.259331ms May 6 18:49:39.815: INFO: Pod "e2e-test-nginx-rc-9f6dq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07064499s May 6 18:49:41.818: INFO: Pod "e2e-test-nginx-rc-9f6dq": Phase="Running", Reason="", readiness=true. Elapsed: 4.073771973s May 6 18:49:41.818: INFO: Pod "e2e-test-nginx-rc-9f6dq" satisfied condition "running and ready" May 6 18:49:41.818: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-9f6dq] May 6 18:49:41.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-sqctp' May 6 18:49:41.971: INFO: stderr: "" May 6 18:49:41.971: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 6 18:49:41.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-sqctp' May 6 18:49:42.097: INFO: stderr: "" May 6 18:49:42.097: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:49:42.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sqctp" for this suite. May 6 18:50:06.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:50:06.151: INFO: namespace: e2e-tests-kubectl-sqctp, resource: bindings, ignored listing per whitelist May 6 18:50:06.184: INFO: namespace e2e-tests-kubectl-sqctp deletion completed in 24.082973123s • [SLOW TEST:28.785 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:50:06.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-672ceacd-8fca-11ea-a618-0242ac110019 STEP: Creating a pod to test consume secrets May 6 18:50:06.555: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-672d6e7e-8fca-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-5p77m" to be "success or failure" May 6 18:50:06.571: INFO: Pod "pod-projected-secrets-672d6e7e-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 15.58691ms May 6 18:50:08.576: INFO: Pod "pod-projected-secrets-672d6e7e-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020479775s May 6 18:50:10.581: INFO: Pod "pod-projected-secrets-672d6e7e-8fca-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 4.02561511s May 6 18:50:12.584: INFO: Pod "pod-projected-secrets-672d6e7e-8fca-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028665101s STEP: Saw pod success May 6 18:50:12.584: INFO: Pod "pod-projected-secrets-672d6e7e-8fca-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:50:12.587: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-672d6e7e-8fca-11ea-a618-0242ac110019 container secret-volume-test: STEP: delete the pod May 6 18:50:12.646: INFO: Waiting for pod pod-projected-secrets-672d6e7e-8fca-11ea-a618-0242ac110019 to disappear May 6 18:50:12.661: INFO: Pod pod-projected-secrets-672d6e7e-8fca-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:50:12.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5p77m" for this suite. May 6 18:50:18.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:50:18.708: INFO: namespace: e2e-tests-projected-5p77m, resource: bindings, ignored listing per whitelist May 6 18:50:18.740: INFO: namespace e2e-tests-projected-5p77m deletion completed in 6.077350282s • [SLOW TEST:12.556 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client 
May 6 18:50:18.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 18:50:18.885: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e871936-8fca-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-h7hgh" to be "success or failure" May 6 18:50:18.960: INFO: Pod "downwardapi-volume-6e871936-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 74.872744ms May 6 18:50:21.021: INFO: Pod "downwardapi-volume-6e871936-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135982391s May 6 18:50:23.911: INFO: Pod "downwardapi-volume-6e871936-8fca-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 5.025820309s May 6 18:50:25.916: INFO: Pod "downwardapi-volume-6e871936-8fca-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.030527722s STEP: Saw pod success May 6 18:50:25.916: INFO: Pod "downwardapi-volume-6e871936-8fca-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:50:25.928: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6e871936-8fca-11ea-a618-0242ac110019 container client-container: STEP: delete the pod May 6 18:50:25.993: INFO: Waiting for pod downwardapi-volume-6e871936-8fca-11ea-a618-0242ac110019 to disappear May 6 18:50:26.034: INFO: Pod downwardapi-volume-6e871936-8fca-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:50:26.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h7hgh" for this suite. May 6 18:50:32.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:50:32.321: INFO: namespace: e2e-tests-projected-h7hgh, resource: bindings, ignored listing per whitelist May 6 18:50:32.368: INFO: namespace e2e-tests-projected-h7hgh deletion completed in 6.33187333s • [SLOW TEST:13.628 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client May 6 18:50:32.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 6 18:50:47.326: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 18:50:47.640: INFO: Pod pod-with-prestop-http-hook still exists May 6 18:50:49.641: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 18:50:49.716: INFO: Pod pod-with-prestop-http-hook still exists May 6 18:50:51.641: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 18:50:51.645: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:50:51.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gj7t2" for this suite. 
May 6 18:51:15.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:51:15.718: INFO: namespace: e2e-tests-container-lifecycle-hook-gj7t2, resource: bindings, ignored listing per whitelist May 6 18:51:15.748: INFO: namespace e2e-tests-container-lifecycle-hook-gj7t2 deletion completed in 24.09132555s • [SLOW TEST:43.380 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:51:15.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image 
docker.io/library/nginx:1.14-alpine May 6 18:51:15.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-k77vb' May 6 18:51:16.020: INFO: stderr: "" May 6 18:51:16.020: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 6 18:51:21.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-k77vb -o json' May 6 18:51:21.390: INFO: stderr: "" May 6 18:51:21.390: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-06T18:51:16Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-k77vb\",\n \"resourceVersion\": \"9100588\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-k77vb/pods/e2e-test-nginx-pod\",\n \"uid\": \"909509e4-8fca-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-54qbv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n 
\"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-54qbv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-54qbv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T18:51:16Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T18:51:19Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T18:51:19Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T18:51:16Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://57cdbab2d4c7f65b99552e20c62a5717b60aaa13e13d90c3ad517ca105eda50a\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-06T18:51:18Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.141\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-06T18:51:16Z\"\n }\n}\n" STEP: replace the image in the pod May 6 18:51:21.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-k77vb' May 6 18:51:21.799: INFO: stderr: "" May 6 18:51:21.799: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying 
the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 May 6 18:51:21.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-k77vb' May 6 18:51:26.085: INFO: stderr: "" May 6 18:51:26.085: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:51:26.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-k77vb" for this suite. May 6 18:51:34.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:51:34.283: INFO: namespace: e2e-tests-kubectl-k77vb, resource: bindings, ignored listing per whitelist May 6 18:51:34.315: INFO: namespace e2e-tests-kubectl-k77vb deletion completed in 8.220428616s • [SLOW TEST:18.566 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 
6 18:51:34.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command May 6 18:51:34.550: INFO: Waiting up to 5m0s for pod "client-containers-9ba0e737-8fca-11ea-a618-0242ac110019" in namespace "e2e-tests-containers-9kwvb" to be "success or failure" May 6 18:51:34.564: INFO: Pod "client-containers-9ba0e737-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 14.779251ms May 6 18:51:36.569: INFO: Pod "client-containers-9ba0e737-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018905562s May 6 18:51:38.573: INFO: Pod "client-containers-9ba0e737-8fca-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023226917s STEP: Saw pod success May 6 18:51:38.573: INFO: Pod "client-containers-9ba0e737-8fca-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:51:38.576: INFO: Trying to get logs from node hunter-worker2 pod client-containers-9ba0e737-8fca-11ea-a618-0242ac110019 container test-container: STEP: delete the pod May 6 18:51:38.636: INFO: Waiting for pod client-containers-9ba0e737-8fca-11ea-a618-0242ac110019 to disappear May 6 18:51:38.912: INFO: Pod client-containers-9ba0e737-8fca-11ea-a618-0242ac110019 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:51:38.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-9kwvb" for this suite. 
May 6 18:51:44.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:51:44.987: INFO: namespace: e2e-tests-containers-9kwvb, resource: bindings, ignored listing per whitelist May 6 18:51:45.022: INFO: namespace e2e-tests-containers-9kwvb deletion completed in 6.104334217s • [SLOW TEST:10.706 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:51:45.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:51:52.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-9qlmn" for this suite. May 6 18:51:58.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:51:58.543: INFO: namespace: e2e-tests-namespaces-9qlmn, resource: bindings, ignored listing per whitelist May 6 18:51:58.558: INFO: namespace e2e-tests-namespaces-9qlmn deletion completed in 6.134152028s STEP: Destroying namespace "e2e-tests-nsdeletetest-nqsvp" for this suite. May 6 18:51:58.561: INFO: Namespace e2e-tests-nsdeletetest-nqsvp was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-z2jz6" for this suite. May 6 18:52:06.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:52:06.643: INFO: namespace: e2e-tests-nsdeletetest-z2jz6, resource: bindings, ignored listing per whitelist May 6 18:52:06.652: INFO: namespace e2e-tests-nsdeletetest-z2jz6 deletion completed in 8.091441292s • [SLOW TEST:21.631 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:52:06.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-af3134ff-8fca-11ea-a618-0242ac110019 STEP: Creating a pod to test consume secrets May 6 18:52:07.516: INFO: Waiting up to 5m0s for pod "pod-secrets-af369475-8fca-11ea-a618-0242ac110019" in namespace "e2e-tests-secrets-rm9qc" to be "success or failure" May 6 18:52:07.534: INFO: Pod "pod-secrets-af369475-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 17.727704ms May 6 18:52:09.558: INFO: Pod "pod-secrets-af369475-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041915393s May 6 18:52:11.707: INFO: Pod "pod-secrets-af369475-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19072312s May 6 18:52:13.711: INFO: Pod "pod-secrets-af369475-8fca-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 6.194862639s May 6 18:52:15.715: INFO: Pod "pod-secrets-af369475-8fca-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.198715212s STEP: Saw pod success May 6 18:52:15.715: INFO: Pod "pod-secrets-af369475-8fca-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:52:15.717: INFO: Trying to get logs from node hunter-worker pod pod-secrets-af369475-8fca-11ea-a618-0242ac110019 container secret-volume-test: STEP: delete the pod May 6 18:52:16.205: INFO: Waiting for pod pod-secrets-af369475-8fca-11ea-a618-0242ac110019 to disappear May 6 18:52:16.216: INFO: Pod pod-secrets-af369475-8fca-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:52:16.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-rm9qc" for this suite. May 6 18:52:22.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:52:22.269: INFO: namespace: e2e-tests-secrets-rm9qc, resource: bindings, ignored listing per whitelist May 6 18:52:22.316: INFO: namespace e2e-tests-secrets-rm9qc deletion completed in 6.096707962s • [SLOW TEST:15.663 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:52:22.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 18:52:22.402: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:52:26.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vrfr9" for this suite. May 6 18:53:14.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:53:14.659: INFO: namespace: e2e-tests-pods-vrfr9, resource: bindings, ignored listing per whitelist May 6 18:53:14.712: INFO: namespace e2e-tests-pods-vrfr9 deletion completed in 48.084823261s • [SLOW TEST:52.396 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:53:14.713: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 18:53:14.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' May 6 18:53:14.860: INFO: stderr: "" May 6 18:53:14.860: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" May 6 18:53:14.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t9gbp' May 6 18:53:22.541: INFO: stderr: "" May 6 18:53:22.541: INFO: stdout: "replicationcontroller/redis-master created\n" May 6 18:53:22.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t9gbp' May 6 18:53:23.430: INFO: stderr: "" May 6 18:53:23.430: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
May 6 18:53:24.570: INFO: Selector matched 1 pods for map[app:redis] May 6 18:53:24.570: INFO: Found 0 / 1 May 6 18:53:25.558: INFO: Selector matched 1 pods for map[app:redis] May 6 18:53:25.559: INFO: Found 0 / 1 May 6 18:53:26.434: INFO: Selector matched 1 pods for map[app:redis] May 6 18:53:26.434: INFO: Found 0 / 1 May 6 18:53:27.663: INFO: Selector matched 1 pods for map[app:redis] May 6 18:53:27.663: INFO: Found 0 / 1 May 6 18:53:28.434: INFO: Selector matched 1 pods for map[app:redis] May 6 18:53:28.434: INFO: Found 1 / 1 May 6 18:53:28.434: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 6 18:53:28.437: INFO: Selector matched 1 pods for map[app:redis] May 6 18:53:28.438: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 6 18:53:28.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-2xmk2 --namespace=e2e-tests-kubectl-t9gbp' May 6 18:53:28.813: INFO: stderr: "" May 6 18:53:28.813: INFO: stdout: "Name: redis-master-2xmk2\nNamespace: e2e-tests-kubectl-t9gbp\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Wed, 06 May 2020 18:53:23 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.113\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://bd18b09df6b78ef2a6c7d3a37dd28938699891d0ab95233f6e220ec30b233be4\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 06 May 2020 18:53:27 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-kpw9k (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-kpw9k:\n Type: 
Secret (a volume populated by a Secret)\n SecretName: default-token-kpw9k\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned e2e-tests-kubectl-t9gbp/redis-master-2xmk2 to hunter-worker2\n Normal Pulled 4s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n" May 6 18:53:28.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-t9gbp' May 6 18:53:28.947: INFO: stderr: "" May 6 18:53:28.947: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-t9gbp\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: redis-master-2xmk2\n" May 6 18:53:28.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-t9gbp' May 6 18:53:29.055: INFO: stderr: "" May 6 18:53:29.055: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-t9gbp\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.110.229.139\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 
10.244.2.113:6379\nSession Affinity: None\nEvents: \n" May 6 18:53:29.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' May 6 18:53:29.187: INFO: stderr: "" May 6 18:53:29.187: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 06 May 2020 18:53:28 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 06 May 2020 18:53:28 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 06 May 2020 18:53:28 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 06 May 2020 18:53:28 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 
19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 52d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 52d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 52d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 52d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 6 18:53:29.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-t9gbp' May 6 18:53:29.313: INFO: stderr: "" May 6 18:53:29.313: INFO: stdout: "Name: e2e-tests-kubectl-t9gbp\nLabels: e2e-framework=kubectl\n e2e-run=573e746e-8fbd-11ea-a618-0242ac110019\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:53:29.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-t9gbp" for this suite. 
May 6 18:53:51.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:53:51.399: INFO: namespace: e2e-tests-kubectl-t9gbp, resource: bindings, ignored listing per whitelist May 6 18:53:51.435: INFO: namespace e2e-tests-kubectl-t9gbp deletion completed in 22.1190378s • [SLOW TEST:36.722 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:53:51.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-ed52e633-8fca-11ea-a618-0242ac110019 STEP: Creating a pod to test consume secrets May 6 18:53:51.643: INFO: Waiting up to 5m0s for pod "pod-secrets-ed553ef7-8fca-11ea-a618-0242ac110019" in namespace "e2e-tests-secrets-lpk5t" to be "success or failure" May 6 18:53:51.738: INFO: Pod 
"pod-secrets-ed553ef7-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 94.66996ms May 6 18:53:53.742: INFO: Pod "pod-secrets-ed553ef7-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099011035s May 6 18:53:56.341: INFO: Pod "pod-secrets-ed553ef7-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.698466594s May 6 18:53:58.344: INFO: Pod "pod-secrets-ed553ef7-8fca-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 6.701134194s May 6 18:54:00.348: INFO: Pod "pod-secrets-ed553ef7-8fca-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.70461349s STEP: Saw pod success May 6 18:54:00.348: INFO: Pod "pod-secrets-ed553ef7-8fca-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 18:54:00.350: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-ed553ef7-8fca-11ea-a618-0242ac110019 container secret-volume-test: STEP: delete the pod May 6 18:54:01.313: INFO: Waiting for pod pod-secrets-ed553ef7-8fca-11ea-a618-0242ac110019 to disappear May 6 18:54:01.447: INFO: Pod pod-secrets-ed553ef7-8fca-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:54:01.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-lpk5t" for this suite. 
May 6 18:54:09.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:54:09.906: INFO: namespace: e2e-tests-secrets-lpk5t, resource: bindings, ignored listing per whitelist May 6 18:54:09.950: INFO: namespace e2e-tests-secrets-lpk5t deletion completed in 8.49873732s • [SLOW TEST:18.515 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:54:09.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-n8lb4 May 6 18:54:16.608: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-n8lb4 STEP: checking the pod's current state and verifying that restartCount is present May 6 18:54:16.611: INFO: Initial restart count of pod liveness-http is 0 May 6 
18:54:36.652: INFO: Restart count of pod e2e-tests-container-probe-n8lb4/liveness-http is now 1 (20.040834593s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 18:54:38.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-n8lb4" for this suite. May 6 18:54:44.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 18:54:44.997: INFO: namespace: e2e-tests-container-probe-n8lb4, resource: bindings, ignored listing per whitelist May 6 18:54:45.032: INFO: namespace e2e-tests-container-probe-n8lb4 deletion completed in 6.694179917s • [SLOW TEST:35.081 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 18:54:45.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to 
handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 6 18:54:59.631: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 18:54:59.640: INFO: Pod pod-with-prestop-exec-hook still exists May 6 18:55:01.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 18:55:01.645: INFO: Pod pod-with-prestop-exec-hook still exists May 6 18:55:03.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 18:55:03.646: INFO: Pod pod-with-prestop-exec-hook still exists May 6 18:55:05.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 18:55:05.645: INFO: Pod pod-with-prestop-exec-hook still exists May 6 18:55:07.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 18:55:07.912: INFO: Pod pod-with-prestop-exec-hook still exists May 6 18:55:09.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 18:55:09.645: INFO: Pod pod-with-prestop-exec-hook still exists May 6 18:55:11.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 18:55:11.680: INFO: Pod pod-with-prestop-exec-hook still exists May 6 18:55:13.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 18:55:14.174: INFO: Pod pod-with-prestop-exec-hook still exists May 6 18:55:15.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 18:55:15.685: INFO: Pod pod-with-prestop-exec-hook still exists May 6 18:55:17.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 18:55:17.645: INFO: Pod pod-with-prestop-exec-hook still exists May 6 18:55:19.641: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 18:55:19.645: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook 
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 18:55:19.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9km29" for this suite.
May 6 18:55:43.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 18:55:43.993: INFO: namespace: e2e-tests-container-lifecycle-hook-9km29, resource: bindings, ignored listing per whitelist
May 6 18:55:44.034: INFO: namespace e2e-tests-container-lifecycle-hook-9km29 deletion completed in 24.380175428s
• [SLOW TEST:59.002 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 18:55:44.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 6 18:55:45.721: INFO: Pod name wrapped-volume-race-31500e89-8fcb-11ea-a618-0242ac110019: Found 0 pods out of 5
May 6 18:55:50.730: INFO: Pod name wrapped-volume-race-31500e89-8fcb-11ea-a618-0242ac110019: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-31500e89-8fcb-11ea-a618-0242ac110019 in namespace e2e-tests-emptydir-wrapper-pr64m, will wait for the garbage collector to delete the pods
May 6 18:57:43.685: INFO: Deleting ReplicationController wrapped-volume-race-31500e89-8fcb-11ea-a618-0242ac110019 took: 346.076132ms
May 6 18:57:43.986: INFO: Terminating ReplicationController wrapped-volume-race-31500e89-8fcb-11ea-a618-0242ac110019 pods took: 300.265813ms
STEP: Creating RC which spawns configmap-volume pods
May 6 18:58:21.570: INFO: Pod name wrapped-volume-race-8e30003b-8fcb-11ea-a618-0242ac110019: Found 0 pods out of 5
May 6 18:58:26.578: INFO: Pod name wrapped-volume-race-8e30003b-8fcb-11ea-a618-0242ac110019: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8e30003b-8fcb-11ea-a618-0242ac110019 in namespace e2e-tests-emptydir-wrapper-pr64m, will wait for the garbage collector to delete the pods
May 6 19:00:04.659: INFO: Deleting ReplicationController wrapped-volume-race-8e30003b-8fcb-11ea-a618-0242ac110019 took: 6.793866ms
May 6 19:00:04.759: INFO: Terminating ReplicationController wrapped-volume-race-8e30003b-8fcb-11ea-a618-0242ac110019 pods took: 100.255583ms
STEP: Creating RC which spawns configmap-volume pods
May 6 19:00:51.426: INFO: Pod name wrapped-volume-race-e784d4d8-8fcb-11ea-a618-0242ac110019: Found 0 pods out of 5
May 6 19:00:56.434: INFO: Pod name wrapped-volume-race-e784d4d8-8fcb-11ea-a618-0242ac110019: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e784d4d8-8fcb-11ea-a618-0242ac110019 in namespace e2e-tests-emptydir-wrapper-pr64m, will wait for the garbage collector to delete the pods
May 6 19:02:32.516: INFO: Deleting ReplicationController wrapped-volume-race-e784d4d8-8fcb-11ea-a618-0242ac110019 took: 7.884355ms
May 6 19:02:32.716: INFO: Terminating ReplicationController wrapped-volume-race-e784d4d8-8fcb-11ea-a618-0242ac110019 pods took: 200.291422ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 19:03:12.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-pr64m" for this suite.
May 6 19:03:20.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 19:03:20.853: INFO: namespace: e2e-tests-emptydir-wrapper-pr64m, resource: bindings, ignored listing per whitelist
May 6 19:03:20.855: INFO: namespace e2e-tests-emptydir-wrapper-pr64m deletion completed in 8.090732809s
• [SLOW TEST:456.820 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 19:03:20.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 19:03:25.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-tww44" for this suite.
May 6 19:03:31.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 19:03:31.102: INFO: namespace: e2e-tests-kubelet-test-tww44, resource: bindings, ignored listing per whitelist
May 6 19:03:31.150: INFO: namespace e2e-tests-kubelet-test-tww44 deletion completed in 6.084218284s
• [SLOW TEST:10.295 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 19:03:31.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
May 6 19:03:31.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tm4vr'
May 6 19:03:34.240: INFO: stderr: ""
May 6 19:03:34.240: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 6 19:03:34.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tm4vr'
May 6 19:03:34.402: INFO: stderr: ""
May 6 19:03:34.402: INFO: stdout: "update-demo-nautilus-ctmst update-demo-nautilus-vkz7b "
May 6 19:03:34.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ctmst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tm4vr'
May 6 19:03:34.518: INFO: stderr: ""
May 6 19:03:34.518: INFO: stdout: ""
May 6 19:03:34.518: INFO: update-demo-nautilus-ctmst is created but not running
May 6 19:03:39.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tm4vr'
May 6 19:03:39.626: INFO: stderr: ""
May 6 19:03:39.626: INFO: stdout: "update-demo-nautilus-ctmst update-demo-nautilus-vkz7b "
May 6 19:03:39.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ctmst -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tm4vr'
May 6 19:03:39.725: INFO: stderr: ""
May 6 19:03:39.725: INFO: stdout: "true"
May 6 19:03:39.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ctmst -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tm4vr'
May 6 19:03:39.820: INFO: stderr: ""
May 6 19:03:39.820: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 6 19:03:39.820: INFO: validating pod update-demo-nautilus-ctmst
May 6 19:03:39.824: INFO: got data: { "image": "nautilus.jpg" }
May 6 19:03:39.824: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 6 19:03:39.824: INFO: update-demo-nautilus-ctmst is verified up and running
May 6 19:03:39.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vkz7b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tm4vr'
May 6 19:03:39.933: INFO: stderr: ""
May 6 19:03:39.933: INFO: stdout: "true"
May 6 19:03:39.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vkz7b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tm4vr'
May 6 19:03:40.035: INFO: stderr: ""
May 6 19:03:40.035: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 6 19:03:40.035: INFO: validating pod update-demo-nautilus-vkz7b
May 6 19:03:40.039: INFO: got data: { "image": "nautilus.jpg" }
May 6 19:03:40.039: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 6 19:03:40.039: INFO: update-demo-nautilus-vkz7b is verified up and running
STEP: rolling-update to new replication controller
May 6 19:03:40.041: INFO: scanned /root for discovery docs:
May 6 19:03:40.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-tm4vr'
May 6 19:04:03.071: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 6 19:04:03.071: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 6 19:04:03.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tm4vr'
May 6 19:04:03.213: INFO: stderr: ""
May 6 19:04:03.213: INFO: stdout: "update-demo-kitten-t5vrj update-demo-kitten-t6ffw "
May 6 19:04:03.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t5vrj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tm4vr'
May 6 19:04:03.305: INFO: stderr: ""
May 6 19:04:03.305: INFO: stdout: "true"
May 6 19:04:03.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t5vrj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tm4vr'
May 6 19:04:03.396: INFO: stderr: ""
May 6 19:04:03.396: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 6 19:04:03.396: INFO: validating pod update-demo-kitten-t5vrj
May 6 19:04:03.400: INFO: got data: { "image": "kitten.jpg" }
May 6 19:04:03.400: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 6 19:04:03.400: INFO: update-demo-kitten-t5vrj is verified up and running
May 6 19:04:03.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t6ffw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tm4vr'
May 6 19:04:03.530: INFO: stderr: ""
May 6 19:04:03.530: INFO: stdout: "true"
May 6 19:04:03.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-t6ffw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tm4vr'
May 6 19:04:03.626: INFO: stderr: ""
May 6 19:04:03.626: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 6 19:04:03.626: INFO: validating pod update-demo-kitten-t6ffw
May 6 19:04:03.630: INFO: got data: { "image": "kitten.jpg" }
May 6 19:04:03.630: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 6 19:04:03.630: INFO: update-demo-kitten-t6ffw is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 19:04:03.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tm4vr" for this suite.
May 6 19:04:25.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 19:04:25.968: INFO: namespace: e2e-tests-kubectl-tm4vr, resource: bindings, ignored listing per whitelist
May 6 19:04:25.986: INFO: namespace e2e-tests-kubectl-tm4vr deletion completed in 22.353123071s
• [SLOW TEST:54.837 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 19:04:25.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
May 6 19:04:30.661: INFO: Successfully updated pod "annotationupdate6786ba61-8fcc-11ea-a618-0242ac110019"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 19:04:32.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-z7r5z" for this suite.
May 6 19:04:54.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 19:04:54.795: INFO: namespace: e2e-tests-downward-api-z7r5z, resource: bindings, ignored listing per whitelist
May 6 19:04:54.837: INFO: namespace e2e-tests-downward-api-z7r5z deletion completed in 22.115564086s
• [SLOW TEST:28.850 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 19:04:54.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 6 19:04:54.947: INFO: Waiting up to 5m0s for pod "downward-api-78afda2a-8fcc-11ea-a618-0242ac110019" in namespace "e2e-tests-downward-api-bwkrq" to be "success or failure"
May 6 19:04:54.986: INFO: Pod "downward-api-78afda2a-8fcc-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 38.866277ms
May 6 19:04:56.991: INFO: Pod "downward-api-78afda2a-8fcc-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043179418s
May 6 19:04:58.994: INFO: Pod "downward-api-78afda2a-8fcc-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047051139s
STEP: Saw pod success
May 6 19:04:58.994: INFO: Pod "downward-api-78afda2a-8fcc-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 19:04:58.996: INFO: Trying to get logs from node hunter-worker pod downward-api-78afda2a-8fcc-11ea-a618-0242ac110019 container dapi-container:
STEP: delete the pod
May 6 19:04:59.030: INFO: Waiting for pod downward-api-78afda2a-8fcc-11ea-a618-0242ac110019 to disappear
May 6 19:04:59.035: INFO: Pod downward-api-78afda2a-8fcc-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 19:04:59.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bwkrq" for this suite.
May 6 19:05:05.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 19:05:05.143: INFO: namespace: e2e-tests-downward-api-bwkrq, resource: bindings, ignored listing per whitelist
May 6 19:05:05.247: INFO: namespace e2e-tests-downward-api-bwkrq deletion completed in 6.209376779s
• [SLOW TEST:10.410 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 19:05:05.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-flwrv
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
May 6 19:05:05.390: INFO: Found 0
stateful pods, waiting for 3 May 6 19:05:15.396: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 19:05:15.396: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 19:05:15.396: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 6 19:05:25.395: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 19:05:25.395: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 19:05:25.395: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 6 19:05:25.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flwrv ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 6 19:05:25.644: INFO: stderr: "I0506 19:05:25.542877 3738 log.go:172] (0xc00085e2c0) (0xc000697360) Create stream\nI0506 19:05:25.542988 3738 log.go:172] (0xc00085e2c0) (0xc000697360) Stream added, broadcasting: 1\nI0506 19:05:25.545805 3738 log.go:172] (0xc00085e2c0) Reply frame received for 1\nI0506 19:05:25.545840 3738 log.go:172] (0xc00085e2c0) (0xc0002f8000) Create stream\nI0506 19:05:25.545854 3738 log.go:172] (0xc00085e2c0) (0xc0002f8000) Stream added, broadcasting: 3\nI0506 19:05:25.546928 3738 log.go:172] (0xc00085e2c0) Reply frame received for 3\nI0506 19:05:25.546968 3738 log.go:172] (0xc00085e2c0) (0xc000697400) Create stream\nI0506 19:05:25.546983 3738 log.go:172] (0xc00085e2c0) (0xc000697400) Stream added, broadcasting: 5\nI0506 19:05:25.547975 3738 log.go:172] (0xc00085e2c0) Reply frame received for 5\nI0506 19:05:25.635907 3738 log.go:172] (0xc00085e2c0) Data frame received for 3\nI0506 19:05:25.635929 3738 log.go:172] (0xc0002f8000) (3) Data frame handling\nI0506 19:05:25.635938 3738 log.go:172] (0xc0002f8000) (3) Data frame sent\nI0506 19:05:25.636248 3738 
log.go:172] (0xc00085e2c0) Data frame received for 3\nI0506 19:05:25.636263 3738 log.go:172] (0xc0002f8000) (3) Data frame handling\nI0506 19:05:25.636442 3738 log.go:172] (0xc00085e2c0) Data frame received for 5\nI0506 19:05:25.636457 3738 log.go:172] (0xc000697400) (5) Data frame handling\nI0506 19:05:25.639297 3738 log.go:172] (0xc00085e2c0) Data frame received for 1\nI0506 19:05:25.639324 3738 log.go:172] (0xc000697360) (1) Data frame handling\nI0506 19:05:25.639345 3738 log.go:172] (0xc000697360) (1) Data frame sent\nI0506 19:05:25.639361 3738 log.go:172] (0xc00085e2c0) (0xc000697360) Stream removed, broadcasting: 1\nI0506 19:05:25.639382 3738 log.go:172] (0xc00085e2c0) Go away received\nI0506 19:05:25.639654 3738 log.go:172] (0xc00085e2c0) (0xc000697360) Stream removed, broadcasting: 1\nI0506 19:05:25.639703 3738 log.go:172] (0xc00085e2c0) (0xc0002f8000) Stream removed, broadcasting: 3\nI0506 19:05:25.639722 3738 log.go:172] (0xc00085e2c0) (0xc000697400) Stream removed, broadcasting: 5\n" May 6 19:05:25.644: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 6 19:05:25.644: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 6 19:05:35.679: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 6 19:05:45.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flwrv ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 19:05:45.919: INFO: stderr: "I0506 19:05:45.849353 3761 log.go:172] (0xc0003a0370) (0xc00074e640) Create stream\nI0506 19:05:45.849414 3761 log.go:172] (0xc0003a0370) (0xc00074e640) Stream added, broadcasting: 1\nI0506 19:05:45.851990 3761 log.go:172] (0xc0003a0370) 
Reply frame received for 1\nI0506 19:05:45.852075 3761 log.go:172] (0xc0003a0370) (0xc0005fabe0) Create stream\nI0506 19:05:45.852115 3761 log.go:172] (0xc0003a0370) (0xc0005fabe0) Stream added, broadcasting: 3\nI0506 19:05:45.853465 3761 log.go:172] (0xc0003a0370) Reply frame received for 3\nI0506 19:05:45.853518 3761 log.go:172] (0xc0003a0370) (0xc0001a0000) Create stream\nI0506 19:05:45.853541 3761 log.go:172] (0xc0003a0370) (0xc0001a0000) Stream added, broadcasting: 5\nI0506 19:05:45.854742 3761 log.go:172] (0xc0003a0370) Reply frame received for 5\nI0506 19:05:45.913578 3761 log.go:172] (0xc0003a0370) Data frame received for 5\nI0506 19:05:45.913621 3761 log.go:172] (0xc0001a0000) (5) Data frame handling\nI0506 19:05:45.913650 3761 log.go:172] (0xc0003a0370) Data frame received for 3\nI0506 19:05:45.913660 3761 log.go:172] (0xc0005fabe0) (3) Data frame handling\nI0506 19:05:45.913672 3761 log.go:172] (0xc0005fabe0) (3) Data frame sent\nI0506 19:05:45.913683 3761 log.go:172] (0xc0003a0370) Data frame received for 3\nI0506 19:05:45.913691 3761 log.go:172] (0xc0005fabe0) (3) Data frame handling\nI0506 19:05:45.915000 3761 log.go:172] (0xc0003a0370) Data frame received for 1\nI0506 19:05:45.915031 3761 log.go:172] (0xc00074e640) (1) Data frame handling\nI0506 19:05:45.915045 3761 log.go:172] (0xc00074e640) (1) Data frame sent\nI0506 19:05:45.915060 3761 log.go:172] (0xc0003a0370) (0xc00074e640) Stream removed, broadcasting: 1\nI0506 19:05:45.915086 3761 log.go:172] (0xc0003a0370) Go away received\nI0506 19:05:45.915510 3761 log.go:172] (0xc0003a0370) (0xc00074e640) Stream removed, broadcasting: 1\nI0506 19:05:45.915548 3761 log.go:172] (0xc0003a0370) (0xc0005fabe0) Stream removed, broadcasting: 3\nI0506 19:05:45.915565 3761 log.go:172] (0xc0003a0370) (0xc0001a0000) Stream removed, broadcasting: 5\n" May 6 19:05:45.919: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 6 19:05:45.919: INFO: stdout of mv -v /tmp/index.html 
/usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 6 19:06:05.952: INFO: Waiting for StatefulSet e2e-tests-statefulset-flwrv/ss2 to complete update May 6 19:06:05.952: INFO: Waiting for Pod e2e-tests-statefulset-flwrv/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 6 19:06:15.960: INFO: Waiting for StatefulSet e2e-tests-statefulset-flwrv/ss2 to complete update STEP: Rolling back to a previous revision May 6 19:06:25.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flwrv ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 6 19:06:26.223: INFO: stderr: "I0506 19:06:26.086541 3784 log.go:172] (0xc00014c630) (0xc000758640) Create stream\nI0506 19:06:26.086596 3784 log.go:172] (0xc00014c630) (0xc000758640) Stream added, broadcasting: 1\nI0506 19:06:26.088532 3784 log.go:172] (0xc00014c630) Reply frame received for 1\nI0506 19:06:26.088571 3784 log.go:172] (0xc00014c630) (0xc000620dc0) Create stream\nI0506 19:06:26.088583 3784 log.go:172] (0xc00014c630) (0xc000620dc0) Stream added, broadcasting: 3\nI0506 19:06:26.089617 3784 log.go:172] (0xc00014c630) Reply frame received for 3\nI0506 19:06:26.089668 3784 log.go:172] (0xc00014c630) (0xc0003a6000) Create stream\nI0506 19:06:26.089679 3784 log.go:172] (0xc00014c630) (0xc0003a6000) Stream added, broadcasting: 5\nI0506 19:06:26.090512 3784 log.go:172] (0xc00014c630) Reply frame received for 5\nI0506 19:06:26.215584 3784 log.go:172] (0xc00014c630) Data frame received for 3\nI0506 19:06:26.215604 3784 log.go:172] (0xc000620dc0) (3) Data frame handling\nI0506 19:06:26.215612 3784 log.go:172] (0xc000620dc0) (3) Data frame sent\nI0506 19:06:26.215637 3784 log.go:172] (0xc00014c630) Data frame received for 5\nI0506 19:06:26.215681 3784 log.go:172] (0xc0003a6000) (5) Data frame handling\nI0506 19:06:26.215718 3784 log.go:172] (0xc00014c630) Data frame received for 
3\nI0506 19:06:26.215740 3784 log.go:172] (0xc000620dc0) (3) Data frame handling\nI0506 19:06:26.217857 3784 log.go:172] (0xc00014c630) Data frame received for 1\nI0506 19:06:26.217886 3784 log.go:172] (0xc000758640) (1) Data frame handling\nI0506 19:06:26.217901 3784 log.go:172] (0xc000758640) (1) Data frame sent\nI0506 19:06:26.217916 3784 log.go:172] (0xc00014c630) (0xc000758640) Stream removed, broadcasting: 1\nI0506 19:06:26.217938 3784 log.go:172] (0xc00014c630) Go away received\nI0506 19:06:26.218200 3784 log.go:172] (0xc00014c630) (0xc000758640) Stream removed, broadcasting: 1\nI0506 19:06:26.218219 3784 log.go:172] (0xc00014c630) (0xc000620dc0) Stream removed, broadcasting: 3\nI0506 19:06:26.218225 3784 log.go:172] (0xc00014c630) (0xc0003a6000) Stream removed, broadcasting: 5\n" May 6 19:06:26.223: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 6 19:06:26.223: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 6 19:06:36.276: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 6 19:06:46.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-flwrv ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 19:06:46.540: INFO: stderr: "I0506 19:06:46.438471 3807 log.go:172] (0xc00013a6e0) (0xc0005b92c0) Create stream\nI0506 19:06:46.438528 3807 log.go:172] (0xc00013a6e0) (0xc0005b92c0) Stream added, broadcasting: 1\nI0506 19:06:46.440934 3807 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0506 19:06:46.440991 3807 log.go:172] (0xc00013a6e0) (0xc00075a000) Create stream\nI0506 19:06:46.441003 3807 log.go:172] (0xc00013a6e0) (0xc00075a000) Stream added, broadcasting: 3\nI0506 19:06:46.442382 3807 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0506 19:06:46.442446 3807 log.go:172] (0xc00013a6e0) (0xc000274000) 
Create stream\nI0506 19:06:46.442478 3807 log.go:172] (0xc00013a6e0) (0xc000274000) Stream added, broadcasting: 5\nI0506 19:06:46.443558 3807 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0506 19:06:46.535004 3807 log.go:172] (0xc00013a6e0) Data frame received for 5\nI0506 19:06:46.535067 3807 log.go:172] (0xc000274000) (5) Data frame handling\nI0506 19:06:46.535098 3807 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0506 19:06:46.535115 3807 log.go:172] (0xc00075a000) (3) Data frame handling\nI0506 19:06:46.535133 3807 log.go:172] (0xc00075a000) (3) Data frame sent\nI0506 19:06:46.535146 3807 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0506 19:06:46.535158 3807 log.go:172] (0xc00075a000) (3) Data frame handling\nI0506 19:06:46.536340 3807 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0506 19:06:46.536365 3807 log.go:172] (0xc0005b92c0) (1) Data frame handling\nI0506 19:06:46.536387 3807 log.go:172] (0xc0005b92c0) (1) Data frame sent\nI0506 19:06:46.536404 3807 log.go:172] (0xc00013a6e0) (0xc0005b92c0) Stream removed, broadcasting: 1\nI0506 19:06:46.536429 3807 log.go:172] (0xc00013a6e0) Go away received\nI0506 19:06:46.536573 3807 log.go:172] (0xc00013a6e0) (0xc0005b92c0) Stream removed, broadcasting: 1\nI0506 19:06:46.536589 3807 log.go:172] (0xc00013a6e0) (0xc00075a000) Stream removed, broadcasting: 3\nI0506 19:06:46.536597 3807 log.go:172] (0xc00013a6e0) (0xc000274000) Stream removed, broadcasting: 5\n" May 6 19:06:46.540: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 6 19:06:46.540: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 6 19:07:06.570: INFO: Waiting for StatefulSet e2e-tests-statefulset-flwrv/ss2 to complete update May 6 19:07:06.570: INFO: Waiting for Pod e2e-tests-statefulset-flwrv/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet 
functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 6 19:07:16.577: INFO: Deleting all statefulset in ns e2e-tests-statefulset-flwrv May 6 19:07:16.580: INFO: Scaling statefulset ss2 to 0 May 6 19:07:26.602: INFO: Waiting for statefulset status.replicas updated to 0 May 6 19:07:26.606: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 19:07:26.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-flwrv" for this suite. May 6 19:07:34.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 19:07:34.681: INFO: namespace: e2e-tests-statefulset-flwrv, resource: bindings, ignored listing per whitelist May 6 19:07:34.729: INFO: namespace e2e-tests-statefulset-flwrv deletion completed in 8.109757622s • [SLOW TEST:149.482 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 19:07:34.730: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-nr74t/configmap-test-d805330e-8fcc-11ea-a618-0242ac110019 STEP: Creating a pod to test consume configMaps May 6 19:07:34.890: INFO: Waiting up to 5m0s for pod "pod-configmaps-d80a20d4-8fcc-11ea-a618-0242ac110019" in namespace "e2e-tests-configmap-nr74t" to be "success or failure" May 6 19:07:34.897: INFO: Pod "pod-configmaps-d80a20d4-8fcc-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 6.792671ms May 6 19:07:36.901: INFO: Pod "pod-configmaps-d80a20d4-8fcc-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010754265s May 6 19:07:38.910: INFO: Pod "pod-configmaps-d80a20d4-8fcc-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019693466s STEP: Saw pod success May 6 19:07:38.910: INFO: Pod "pod-configmaps-d80a20d4-8fcc-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 19:07:38.912: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-d80a20d4-8fcc-11ea-a618-0242ac110019 container env-test: STEP: delete the pod May 6 19:07:38.932: INFO: Waiting for pod pod-configmaps-d80a20d4-8fcc-11ea-a618-0242ac110019 to disappear May 6 19:07:38.942: INFO: Pod pod-configmaps-d80a20d4-8fcc-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 19:07:38.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nr74t" for this suite. 
May 6 19:07:44.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 19:07:45.010: INFO: namespace: e2e-tests-configmap-nr74t, resource: bindings, ignored listing per whitelist May 6 19:07:45.049: INFO: namespace e2e-tests-configmap-nr74t deletion completed in 6.104144799s • [SLOW TEST:10.320 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 19:07:45.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 19:07:45.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de30d89b-8fcc-11ea-a618-0242ac110019" in namespace "e2e-tests-downward-api-kbqj4" to be "success or failure" May 6 19:07:45.242: INFO: Pod "downwardapi-volume-de30d89b-8fcc-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.150203ms May 6 19:07:47.268: INFO: Pod "downwardapi-volume-de30d89b-8fcc-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047567432s May 6 19:07:49.271: INFO: Pod "downwardapi-volume-de30d89b-8fcc-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051211231s STEP: Saw pod success May 6 19:07:49.271: INFO: Pod "downwardapi-volume-de30d89b-8fcc-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 19:07:49.274: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-de30d89b-8fcc-11ea-a618-0242ac110019 container client-container: STEP: delete the pod May 6 19:07:49.292: INFO: Waiting for pod downwardapi-volume-de30d89b-8fcc-11ea-a618-0242ac110019 to disappear May 6 19:07:49.296: INFO: Pod downwardapi-volume-de30d89b-8fcc-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 19:07:49.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-kbqj4" for this suite. 
May 6 19:07:55.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 19:07:55.512: INFO: namespace: e2e-tests-downward-api-kbqj4, resource: bindings, ignored listing per whitelist May 6 19:07:55.538: INFO: namespace e2e-tests-downward-api-kbqj4 deletion completed in 6.239529339s • [SLOW TEST:10.489 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 19:07:55.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-e471cb2a-8fcc-11ea-a618-0242ac110019 STEP: Creating a pod to test consume configMaps May 6 19:07:55.732: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e473f6dc-8fcc-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-2p8wj" to be "success or failure" May 6 19:07:55.736: INFO: Pod "pod-projected-configmaps-e473f6dc-8fcc-11ea-a618-0242ac110019": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.175509ms May 6 19:07:57.739: INFO: Pod "pod-projected-configmaps-e473f6dc-8fcc-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006805133s May 6 19:07:59.744: INFO: Pod "pod-projected-configmaps-e473f6dc-8fcc-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011121788s May 6 19:08:01.747: INFO: Pod "pod-projected-configmaps-e473f6dc-8fcc-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014734842s STEP: Saw pod success May 6 19:08:01.747: INFO: Pod "pod-projected-configmaps-e473f6dc-8fcc-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 19:08:01.750: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-e473f6dc-8fcc-11ea-a618-0242ac110019 container projected-configmap-volume-test: STEP: delete the pod May 6 19:08:01.858: INFO: Waiting for pod pod-projected-configmaps-e473f6dc-8fcc-11ea-a618-0242ac110019 to disappear May 6 19:08:01.892: INFO: Pod pod-projected-configmaps-e473f6dc-8fcc-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 19:08:01.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2p8wj" for this suite. 
May 6 19:08:08.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 19:08:08.071: INFO: namespace: e2e-tests-projected-2p8wj, resource: bindings, ignored listing per whitelist May 6 19:08:08.092: INFO: namespace e2e-tests-projected-2p8wj deletion completed in 6.195595565s • [SLOW TEST:12.553 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 19:08:08.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed May 6 19:08:14.658: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-ebe29291-8fcc-11ea-a618-0242ac110019", GenerateName:"", Namespace:"e2e-tests-pods-h78vx", 
SelfLink:"/api/v1/namespaces/e2e-tests-pods-h78vx/pods/pod-submit-remove-ebe29291-8fcc-11ea-a618-0242ac110019", UID:"ec1491d0-8fcc-11ea-99e8-0242ac110002", ResourceVersion:"9103822", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724388888, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"181632002"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6fcw9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001eba140), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6fcw9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0020de808), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021787e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020de850)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020de870)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0020de878), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), 
EnableServiceLinks:(*bool)(0xc0020de87c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388888, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388893, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388893, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724388888, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.124", StartTime:(*v1.Time)(0xc00275ad20), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00275adc0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"containerd://ceceefb7e2a4484a70f12148c6d3acc820db7229cfa9a1894b6bc21b7d0e9c9f"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 6 19:08:19.670: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 19:08:19.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-h78vx" for this suite. May 6 19:08:25.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 19:08:25.726: INFO: namespace: e2e-tests-pods-h78vx, resource: bindings, ignored listing per whitelist May 6 19:08:25.772: INFO: namespace e2e-tests-pods-h78vx deletion completed in 6.095176908s • [SLOW TEST:17.680 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 19:08:25.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy May 6 19:08:26.186: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix274065183/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 19:08:26.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2sv85" for this suite. May 6 19:08:32.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 19:08:32.464: INFO: namespace: e2e-tests-kubectl-2sv85, resource: bindings, ignored listing per whitelist May 6 19:08:32.483: INFO: namespace e2e-tests-kubectl-2sv85 deletion completed in 6.093906632s • [SLOW TEST:6.711 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 19:08:32.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 19:08:33.809: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 6 19:08:38.814: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 6 19:08:38.814: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 6 19:08:38.838: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-h986z,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-h986z/deployments/test-cleanup-deployment,UID:fe25a49c-8fcc-11ea-99e8-0242ac110002,ResourceVersion:9103911,Generation:1,CreationTimestamp:2020-05-06 19:08:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 6 19:08:38.838: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
May 6 19:08:38.846: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 6 19:08:38.846: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-h986z,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-h986z/replicasets/test-cleanup-controller,UID:fb17d9e0-8fcc-11ea-99e8-0242ac110002,ResourceVersion:9103912,Generation:1,CreationTimestamp:2020-05-06 19:08:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment fe25a49c-8fcc-11ea-99e8-0242ac110002 0xc001c95687 0xc001c95688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 6 19:08:38.906: INFO: Pod "test-cleanup-controller-4tqjg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-4tqjg,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-h986z,SelfLink:/api/v1/namespaces/e2e-tests-deployment-h986z/pods/test-cleanup-controller-4tqjg,UID:fb29f7af-8fcc-11ea-99e8-0242ac110002,ResourceVersion:9103909,Generation:0,CreationTimestamp:2020-05-06 19:08:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller fb17d9e0-8fcc-11ea-99e8-0242ac110002 0xc001c95f97 0xc001c95f98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jbnbj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jbnbj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-jbnbj true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001cb2010} {node.kubernetes.io/unreachable Exists NoExecute 0xc001cb2030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:08:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:08:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:08:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:08:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.171,StartTime:2020-05-06 19:08:33 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 19:08:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://aecc0152ec79a20531e5bf08505408d644cdf11ef57c927bfe3d628d67b06a48}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 19:08:38.906: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-h986z" for this suite. May 6 19:08:49.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 19:08:49.063: INFO: namespace: e2e-tests-deployment-h986z, resource: bindings, ignored listing per whitelist May 6 19:08:49.122: INFO: namespace e2e-tests-deployment-h986z deletion completed in 10.156084736s • [SLOW TEST:16.639 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 19:08:49.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-pddrs STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 19:08:49.232: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 6 19:09:15.503: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.127:8080/dial?request=hostName&protocol=udp&host=10.244.1.172&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-pddrs PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 19:09:15.503: INFO: >>> kubeConfig: /root/.kube/config I0506 19:09:15.538121 6 log.go:172] (0xc0007c5ad0) (0xc001bbf2c0) Create stream I0506 19:09:15.538154 6 log.go:172] (0xc0007c5ad0) (0xc001bbf2c0) Stream added, broadcasting: 1 I0506 19:09:15.540899 6 log.go:172] (0xc0007c5ad0) Reply frame received for 1 I0506 19:09:15.540926 6 log.go:172] (0xc0007c5ad0) (0xc000d55ae0) Create stream I0506 19:09:15.540936 6 log.go:172] (0xc0007c5ad0) (0xc000d55ae0) Stream added, broadcasting: 3 I0506 19:09:15.542248 6 log.go:172] (0xc0007c5ad0) Reply frame received for 3 I0506 19:09:15.542305 6 log.go:172] (0xc0007c5ad0) (0xc000d55b80) Create stream I0506 19:09:15.542321 6 log.go:172] (0xc0007c5ad0) (0xc000d55b80) Stream added, broadcasting: 5 I0506 19:09:15.543255 6 log.go:172] (0xc0007c5ad0) Reply frame received for 5 I0506 19:09:15.616101 6 log.go:172] (0xc0007c5ad0) Data frame received for 3 I0506 19:09:15.616126 6 log.go:172] (0xc000d55ae0) (3) Data frame handling I0506 19:09:15.616142 6 log.go:172] (0xc000d55ae0) (3) Data frame sent I0506 19:09:15.616908 6 log.go:172] (0xc0007c5ad0) Data frame received for 3 I0506 19:09:15.616928 6 log.go:172] (0xc000d55ae0) (3) Data frame handling I0506 19:09:15.616952 6 log.go:172] (0xc0007c5ad0) Data frame received for 5 I0506 19:09:15.616977 6 log.go:172] (0xc000d55b80) (5) Data frame handling I0506 19:09:15.619158 6 log.go:172] (0xc0007c5ad0) Data frame received for 1 I0506 19:09:15.619188 6 log.go:172] (0xc001bbf2c0) (1) Data frame handling I0506 19:09:15.619208 6 log.go:172] (0xc001bbf2c0) (1) Data frame sent I0506 19:09:15.619322 6 log.go:172] (0xc0007c5ad0) (0xc001bbf2c0) Stream removed, broadcasting: 1 I0506 19:09:15.619343 6 log.go:172] (0xc0007c5ad0) Go away 
received I0506 19:09:15.619434 6 log.go:172] (0xc0007c5ad0) (0xc001bbf2c0) Stream removed, broadcasting: 1 I0506 19:09:15.619452 6 log.go:172] (0xc0007c5ad0) (0xc000d55ae0) Stream removed, broadcasting: 3 I0506 19:09:15.619461 6 log.go:172] (0xc0007c5ad0) (0xc000d55b80) Stream removed, broadcasting: 5 May 6 19:09:15.619: INFO: Waiting for endpoints: map[] May 6 19:09:15.623: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.127:8080/dial?request=hostName&protocol=udp&host=10.244.2.126&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-pddrs PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 19:09:15.623: INFO: >>> kubeConfig: /root/.kube/config I0506 19:09:15.648609 6 log.go:172] (0xc0015c62c0) (0xc0009aae60) Create stream I0506 19:09:15.648648 6 log.go:172] (0xc0015c62c0) (0xc0009aae60) Stream added, broadcasting: 1 I0506 19:09:15.650810 6 log.go:172] (0xc0015c62c0) Reply frame received for 1 I0506 19:09:15.650865 6 log.go:172] (0xc0015c62c0) (0xc000d55c20) Create stream I0506 19:09:15.650886 6 log.go:172] (0xc0015c62c0) (0xc000d55c20) Stream added, broadcasting: 3 I0506 19:09:15.651623 6 log.go:172] (0xc0015c62c0) Reply frame received for 3 I0506 19:09:15.651658 6 log.go:172] (0xc0015c62c0) (0xc0004994a0) Create stream I0506 19:09:15.651672 6 log.go:172] (0xc0015c62c0) (0xc0004994a0) Stream added, broadcasting: 5 I0506 19:09:15.652365 6 log.go:172] (0xc0015c62c0) Reply frame received for 5 I0506 19:09:15.727393 6 log.go:172] (0xc0015c62c0) Data frame received for 3 I0506 19:09:15.727430 6 log.go:172] (0xc000d55c20) (3) Data frame handling I0506 19:09:15.727456 6 log.go:172] (0xc000d55c20) (3) Data frame sent I0506 19:09:15.728243 6 log.go:172] (0xc0015c62c0) Data frame received for 5 I0506 19:09:15.728303 6 log.go:172] (0xc0004994a0) (5) Data frame handling I0506 19:09:15.728335 6 log.go:172] (0xc0015c62c0) Data frame received for 3 I0506 19:09:15.728355 
6 log.go:172] (0xc000d55c20) (3) Data frame handling I0506 19:09:15.730143 6 log.go:172] (0xc0015c62c0) Data frame received for 1 I0506 19:09:15.730176 6 log.go:172] (0xc0009aae60) (1) Data frame handling I0506 19:09:15.730203 6 log.go:172] (0xc0009aae60) (1) Data frame sent I0506 19:09:15.730231 6 log.go:172] (0xc0015c62c0) (0xc0009aae60) Stream removed, broadcasting: 1 I0506 19:09:15.730268 6 log.go:172] (0xc0015c62c0) Go away received I0506 19:09:15.730360 6 log.go:172] (0xc0015c62c0) (0xc0009aae60) Stream removed, broadcasting: 1 I0506 19:09:15.730428 6 log.go:172] (0xc0015c62c0) (0xc000d55c20) Stream removed, broadcasting: 3 I0506 19:09:15.730490 6 log.go:172] (0xc0015c62c0) (0xc0004994a0) Stream removed, broadcasting: 5 May 6 19:09:15.730: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 19:09:15.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-pddrs" for this suite. 
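The intra-pod UDP check above works by exec'ing `curl` inside the `host-test-container-pod` against the netserver's `/dial` endpoint, which proxies a UDP `hostName` request to the target pod and reports the endpoints that answered. A minimal sketch of how that probe URL is assembled (the helper name is ours; the IPs are specific to this run):

```shell
#!/bin/sh
# Build the /dial URL the e2e framework uses: the netserver on the
# host-network test pod forwards a UDP "hostName" request to the target.
dial_url() {
    # $1 = netserver (host-test pod) IP, $2 = target pod IP
    echo "http://$1:8080/dial?request=hostName&protocol=udp&host=$2&port=8081&tries=1"
}

# The framework then runs, inside the hostexec container:
#   /bin/sh -c "curl -g -q -s '$(dial_url <netserver-ip> <target-ip>)'"
dial_url 10.244.2.127 10.244.1.172
```

An empty endpoints map (`Waiting for endpoints: map[]`) in the log means the dial reply named every expected endpoint, so the test stops retrying.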
May 6 19:09:37.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 19:09:37.798: INFO: namespace: e2e-tests-pod-network-test-pddrs, resource: bindings, ignored listing per whitelist May 6 19:09:37.801: INFO: namespace e2e-tests-pod-network-test-pddrs deletion completed in 22.067052733s • [SLOW TEST:48.679 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 19:09:37.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-48hcz STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 19:09:38.089: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 6 19:10:19.543: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 
10.244.2.128 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-48hcz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 19:10:19.543: INFO: >>> kubeConfig: /root/.kube/config I0506 19:10:19.571894 6 log.go:172] (0xc0012122c0) (0xc001ba7a40) Create stream I0506 19:10:19.571932 6 log.go:172] (0xc0012122c0) (0xc001ba7a40) Stream added, broadcasting: 1 I0506 19:10:19.574345 6 log.go:172] (0xc0012122c0) Reply frame received for 1 I0506 19:10:19.574375 6 log.go:172] (0xc0012122c0) (0xc000c3da40) Create stream I0506 19:10:19.574383 6 log.go:172] (0xc0012122c0) (0xc000c3da40) Stream added, broadcasting: 3 I0506 19:10:19.575055 6 log.go:172] (0xc0012122c0) Reply frame received for 3 I0506 19:10:19.575088 6 log.go:172] (0xc0012122c0) (0xc001c6a8c0) Create stream I0506 19:10:19.575097 6 log.go:172] (0xc0012122c0) (0xc001c6a8c0) Stream added, broadcasting: 5 I0506 19:10:19.575826 6 log.go:172] (0xc0012122c0) Reply frame received for 5 I0506 19:10:20.655388 6 log.go:172] (0xc0012122c0) Data frame received for 3 I0506 19:10:20.655431 6 log.go:172] (0xc000c3da40) (3) Data frame handling I0506 19:10:20.655461 6 log.go:172] (0xc0012122c0) Data frame received for 5 I0506 19:10:20.655490 6 log.go:172] (0xc001c6a8c0) (5) Data frame handling I0506 19:10:20.655515 6 log.go:172] (0xc000c3da40) (3) Data frame sent I0506 19:10:20.655530 6 log.go:172] (0xc0012122c0) Data frame received for 3 I0506 19:10:20.655550 6 log.go:172] (0xc000c3da40) (3) Data frame handling I0506 19:10:20.657618 6 log.go:172] (0xc0012122c0) Data frame received for 1 I0506 19:10:20.657688 6 log.go:172] (0xc001ba7a40) (1) Data frame handling I0506 19:10:20.657720 6 log.go:172] (0xc001ba7a40) (1) Data frame sent I0506 19:10:20.657737 6 log.go:172] (0xc0012122c0) (0xc001ba7a40) Stream removed, broadcasting: 1 I0506 19:10:20.657756 6 log.go:172] (0xc0012122c0) Go away received I0506 19:10:20.657868 6 log.go:172] (0xc0012122c0) 
(0xc001ba7a40) Stream removed, broadcasting: 1 I0506 19:10:20.657888 6 log.go:172] (0xc0012122c0) (0xc000c3da40) Stream removed, broadcasting: 3 I0506 19:10:20.657897 6 log.go:172] (0xc0012122c0) (0xc001c6a8c0) Stream removed, broadcasting: 5 May 6 19:10:20.657: INFO: Found all expected endpoints: [netserver-0] May 6 19:10:20.660: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.173 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-48hcz PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 19:10:20.660: INFO: >>> kubeConfig: /root/.kube/config I0506 19:10:20.691097 6 log.go:172] (0xc001520420) (0xc002090b40) Create stream I0506 19:10:20.691121 6 log.go:172] (0xc001520420) (0xc002090b40) Stream added, broadcasting: 1 I0506 19:10:20.693293 6 log.go:172] (0xc001520420) Reply frame received for 1 I0506 19:10:20.693330 6 log.go:172] (0xc001520420) (0xc002090be0) Create stream I0506 19:10:20.693347 6 log.go:172] (0xc001520420) (0xc002090be0) Stream added, broadcasting: 3 I0506 19:10:20.694370 6 log.go:172] (0xc001520420) Reply frame received for 3 I0506 19:10:20.694414 6 log.go:172] (0xc001520420) (0xc000d55e00) Create stream I0506 19:10:20.694429 6 log.go:172] (0xc001520420) (0xc000d55e00) Stream added, broadcasting: 5 I0506 19:10:20.695480 6 log.go:172] (0xc001520420) Reply frame received for 5 I0506 19:10:21.762293 6 log.go:172] (0xc001520420) Data frame received for 5 I0506 19:10:21.762334 6 log.go:172] (0xc000d55e00) (5) Data frame handling I0506 19:10:21.762363 6 log.go:172] (0xc001520420) Data frame received for 3 I0506 19:10:21.762378 6 log.go:172] (0xc002090be0) (3) Data frame handling I0506 19:10:21.762397 6 log.go:172] (0xc002090be0) (3) Data frame sent I0506 19:10:21.762409 6 log.go:172] (0xc001520420) Data frame received for 3 I0506 19:10:21.762421 6 log.go:172] (0xc002090be0) (3) Data frame handling I0506 19:10:21.764258 6 log.go:172] 
(0xc001520420) Data frame received for 1 I0506 19:10:21.764304 6 log.go:172] (0xc002090b40) (1) Data frame handling I0506 19:10:21.764327 6 log.go:172] (0xc002090b40) (1) Data frame sent I0506 19:10:21.764344 6 log.go:172] (0xc001520420) (0xc002090b40) Stream removed, broadcasting: 1 I0506 19:10:21.764367 6 log.go:172] (0xc001520420) Go away received I0506 19:10:21.764435 6 log.go:172] (0xc001520420) (0xc002090b40) Stream removed, broadcasting: 1 I0506 19:10:21.764458 6 log.go:172] (0xc001520420) (0xc002090be0) Stream removed, broadcasting: 3 I0506 19:10:21.764467 6 log.go:172] (0xc001520420) (0xc000d55e00) Stream removed, broadcasting: 5 May 6 19:10:21.764: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 19:10:21.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-48hcz" for this suite. 
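The node-pod variant above skips the `/dial` proxy and pipes the literal string `hostName` straight into netcat from the host-network pod; any non-blank reply means the target pod's netserver echoed back. A sketch of the probe command as the framework composes it (helper name is ours; IP and port are from this run):

```shell
#!/bin/sh
# Compose the UDP probe the e2e framework execs: send "hostName" to the
# netserver's UDP port, then strip blank lines from the reply.
udp_probe_cmd() {
    # $1 = target pod IP, $2 = netserver UDP port
    echo "echo 'hostName' | nc -w 1 -u $1 $2 | grep -v '^\s*\$'"
}

# Passed verbatim to `/bin/sh -c` inside the hostexec container:
udp_probe_cmd 10.244.2.128 8081
```

`nc -w 1 -u` gives UDP a one-second reply window, which is why each probe in the log takes roughly a second before the streams are torn down.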
May 6 19:10:48.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 19:10:48.088: INFO: namespace: e2e-tests-pod-network-test-48hcz, resource: bindings, ignored listing per whitelist May 6 19:10:48.102: INFO: namespace e2e-tests-pod-network-test-48hcz deletion completed in 26.33270083s • [SLOW TEST:70.301 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 19:10:48.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 6 19:11:00.272: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-4b47106e-8fcd-11ea-a618-0242ac110019,GenerateName:,Namespace:e2e-tests-events-ffmlj,SelfLink:/api/v1/namespaces/e2e-tests-events-ffmlj/pods/send-events-4b47106e-8fcd-11ea-a618-0242ac110019,UID:4b48bb27-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9104351,Generation:0,CreationTimestamp:2020-05-06 19:10:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 223760777,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-hxrtx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hxrtx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-hxrtx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186c8b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00186c930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:10:48 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:10:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:10:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:10:48 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.130,StartTime:2020-05-06 19:10:48 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-06 19:10:58 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://ca4d193ab228aef6d3920455f8439cd02122df5a01cdde41b3685d06edb6673f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 6 19:11:02.276: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 6 19:11:04.281: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 19:11:04.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-ffmlj" for this suite. 
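The events test above lists events whose `involvedObject` matches the pod and then checks that one came from the scheduler and one from the kubelet. A sketch of the equivalent field selector (the helper name is ours, and we assume these `involvedObject.*` fields are selectable for events, as they are in current Kubernetes releases):

```shell
#!/bin/sh
# Build a field selector matching all events recorded against one pod.
event_selector() {
    # $1 = pod name, $2 = namespace
    echo "involvedObject.kind=Pod,involvedObject.name=$1,involvedObject.namespace=$2"
}

# Against this run's pod (requires cluster access):
#   kubectl get events -n e2e-tests-events-ffmlj \
#     --field-selector "$(event_selector send-events-4b47106e-8fcd-11ea-a618-0242ac110019 e2e-tests-events-ffmlj)"
event_selector send-events-4b47106e-8fcd-11ea-a618-0242ac110019 e2e-tests-events-ffmlj
```

The scheduler's event carries `source: default-scheduler`, while the kubelet's image-pull and container-start events carry the node name as their source, which is how the test tells the two apart.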
May 6 19:11:46.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 19:11:47.021: INFO: namespace: e2e-tests-events-ffmlj, resource: bindings, ignored listing per whitelist May 6 19:11:47.067: INFO: namespace e2e-tests-events-ffmlj deletion completed in 42.590570891s • [SLOW TEST:58.965 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 19:11:47.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 6 19:11:48.534: INFO: Waiting up to 5m0s for pod "pod-6f0c161d-8fcd-11ea-a618-0242ac110019" in namespace "e2e-tests-emptydir-lsv47" to be "success or failure" May 6 19:11:48.557: INFO: Pod "pod-6f0c161d-8fcd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 23.205111ms May 6 19:11:50.705: INFO: Pod "pod-6f0c161d-8fcd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.170752147s May 6 19:11:52.708: INFO: Pod "pod-6f0c161d-8fcd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173733525s May 6 19:11:55.084: INFO: Pod "pod-6f0c161d-8fcd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 6.549734653s May 6 19:11:57.087: INFO: Pod "pod-6f0c161d-8fcd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553046916s May 6 19:11:59.315: INFO: Pod "pod-6f0c161d-8fcd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 10.780971722s May 6 19:12:02.203: INFO: Pod "pod-6f0c161d-8fcd-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.668805439s STEP: Saw pod success May 6 19:12:02.203: INFO: Pod "pod-6f0c161d-8fcd-11ea-a618-0242ac110019" satisfied condition "success or failure" May 6 19:12:02.210: INFO: Trying to get logs from node hunter-worker2 pod pod-6f0c161d-8fcd-11ea-a618-0242ac110019 container test-container: STEP: delete the pod May 6 19:12:02.664: INFO: Waiting for pod pod-6f0c161d-8fcd-11ea-a618-0242ac110019 to disappear May 6 19:12:02.741: INFO: Pod pod-6f0c161d-8fcd-11ea-a618-0242ac110019 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 19:12:02.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lsv47" for this suite. 
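The emptydir test above uses the framework's generic "success or failure" wait: poll the pod phase (with a 5m0s cap) until it reaches a terminal phase, then assert it is `Succeeded`. The exit condition of that wait reduces to a two-case check (helper name is ours):

```shell
#!/bin/sh
# True (exit 0) only for the two terminal pod phases that end the
# "success or failure" wait; Pending/Running/Unknown keep it polling.
is_terminal_phase() {
    case "$1" in
        Succeeded|Failed) return 0 ;;
        *) return 1 ;;
    esac
}

# A watch loop would fetch the phase every couple of seconds, e.g. via
#   kubectl get pod <name> -n <ns> -o jsonpath='{.status.phase}'
# and stop once is_terminal_phase "$phase" succeeds.
is_terminal_phase Succeeded && echo terminal
```

That polling cadence is what produces the ~2s-spaced `Phase="Pending"` lines in the log before the final `Phase="Succeeded"` entry.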
May 6 19:12:09.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 19:12:09.223: INFO: namespace: e2e-tests-emptydir-lsv47, resource: bindings, ignored listing per whitelist May 6 19:12:09.238: INFO: namespace e2e-tests-emptydir-lsv47 deletion completed in 6.492229143s • [SLOW TEST:22.170 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 19:12:09.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 19:12:10.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-chgnq" for this suite. 
May 6 19:12:16.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 19:12:17.184: INFO: namespace: e2e-tests-services-chgnq, resource: bindings, ignored listing per whitelist May 6 19:12:17.188: INFO: namespace e2e-tests-services-chgnq deletion completed in 6.65584403s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:7.951 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 19:12:17.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-8106a3a0-8fcd-11ea-a618-0242ac110019 STEP: Creating configMap with name cm-test-opt-upd-8106a3f8-8fcd-11ea-a618-0242ac110019 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8106a3a0-8fcd-11ea-a618-0242ac110019 STEP: Updating configmap cm-test-opt-upd-8106a3f8-8fcd-11ea-a618-0242ac110019 STEP: Creating configMap with name 
cm-test-opt-create-8106a41e-8fcd-11ea-a618-0242ac110019 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 19:13:53.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-s8ft2" for this suite. May 6 19:14:17.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 19:14:17.687: INFO: namespace: e2e-tests-projected-s8ft2, resource: bindings, ignored listing per whitelist May 6 19:14:17.742: INFO: namespace e2e-tests-projected-s8ft2 deletion completed in 24.239422887s • [SLOW TEST:120.554 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 19:14:17.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 19:14:19.131: INFO: 
Creating deployment "nginx-deployment" May 6 19:14:19.162: INFO: Waiting for observed generation 1 May 6 19:14:21.529: INFO: Waiting for all required pods to come up May 6 19:14:22.173: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 6 19:14:38.840: INFO: Waiting for deployment "nginx-deployment" to complete May 6 19:14:38.845: INFO: Updating deployment "nginx-deployment" with a non-existent image May 6 19:14:38.851: INFO: Updating deployment nginx-deployment May 6 19:14:38.851: INFO: Waiting for observed generation 2 May 6 19:14:41.132: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 6 19:14:41.134: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 6 19:14:41.481: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 6 19:14:41.559: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 6 19:14:41.559: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 6 19:14:41.562: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 6 19:14:41.566: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 6 19:14:41.566: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 6 19:14:41.572: INFO: Updating deployment nginx-deployment May 6 19:14:41.572: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 6 19:14:41.893: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 6 19:14:41.933: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 6 19:14:42.311: INFO: Deployment 
"nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-g575v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g575v/deployments/nginx-deployment,UID:c8fd50e6-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105030,Generation:3,CreationTimestamp:2020-05-06 19:14:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-06 19:14:39 +0000 UTC 2020-05-06 19:14:19 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-05-06 19:14:41 +0000 UTC 2020-05-06 19:14:41 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 6 19:14:42.448: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-g575v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g575v/replicasets/nginx-deployment-5c98f8fb5,UID:d4be3a47-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105074,Generation:3,CreationTimestamp:2020-05-06 19:14:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c8fd50e6-8fcd-11ea-99e8-0242ac110002 0xc00205ae57 0xc00205ae58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 6 19:14:42.448: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 6 19:14:42.448: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-g575v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g575v/replicasets/nginx-deployment-85ddf47c5d,UID:c9130aaf-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105058,Generation:3,CreationTimestamp:2020-05-06 19:14:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c8fd50e6-8fcd-11ea-99e8-0242ac110002 0xc00205af57 0xc00205af58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 6 19:14:43.271: INFO: Pod "nginx-deployment-5c98f8fb5-68p6g" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-68p6g,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-5c98f8fb5-68p6g,UID:d4f2dc2f-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105009,Generation:0,CreationTimestamp:2020-05-06 19:14:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4be3a47-8fcd-11ea-99e8-0242ac110002 0xc00186d657 0xc00186d658}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186d6d0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00186d6f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-06 19:14:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.272: INFO: Pod "nginx-deployment-5c98f8fb5-6jvtq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6jvtq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-5c98f8fb5-6jvtq,UID:d68e7bf4-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105080,Generation:0,CreationTimestamp:2020-05-06 19:14:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4be3a47-8fcd-11ea-99e8-0242ac110002 0xc0022c64d0 0xc0022c64d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6550} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-06 19:14:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.272: INFO: Pod "nginx-deployment-5c98f8fb5-dstkv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dstkv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-5c98f8fb5-dstkv,UID:d4c3b18c-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105002,Generation:0,CreationTimestamp:2020-05-06 19:14:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4be3a47-8fcd-11ea-99e8-0242ac110002 0xc0022c6740 0xc0022c6741}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6900} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-06 19:14:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.272: INFO: Pod "nginx-deployment-5c98f8fb5-jc65b" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jc65b,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-5c98f8fb5-jc65b,UID:d4c3aeac-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9104979,Generation:0,CreationTimestamp:2020-05-06 19:14:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4be3a47-8fcd-11ea-99e8-0242ac110002 0xc0022c6a30 0xc0022c6a31}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6af0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-06 19:14:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.272: INFO: Pod "nginx-deployment-5c98f8fb5-kxzgv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kxzgv,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-5c98f8fb5-kxzgv,UID:d4c20b89-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105001,Generation:0,CreationTimestamp:2020-05-06 19:14:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4be3a47-8fcd-11ea-99e8-0242ac110002 0xc0022c6d80 0xc0022c6d81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c6e30} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c6e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-06 19:14:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.272: INFO: Pod "nginx-deployment-5c98f8fb5-qgg8q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qgg8q,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-5c98f8fb5-qgg8q,UID:d6ae077b-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105062,Generation:0,CreationTimestamp:2020-05-06 19:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4be3a47-8fcd-11ea-99e8-0242ac110002 0xc0022c7060 0xc0022c7061}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c72a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c72c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.272: INFO: Pod "nginx-deployment-5c98f8fb5-s99pg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-s99pg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-5c98f8fb5-s99pg,UID:d4ed8af1-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105003,Generation:0,CreationTimestamp:2020-05-06 19:14:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4be3a47-8fcd-11ea-99e8-0242ac110002 0xc0022c7347 0xc0022c7348}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c74c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c74e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-06 19:14:39 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.272: INFO: Pod "nginx-deployment-5c98f8fb5-sskgl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sskgl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-5c98f8fb5-sskgl,UID:d694a887-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105039,Generation:0,CreationTimestamp:2020-05-06 19:14:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4be3a47-8fcd-11ea-99e8-0242ac110002 0xc0022c75b0 0xc0022c75b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c7770} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.272: INFO: Pod "nginx-deployment-5c98f8fb5-t7mww" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-t7mww,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-5c98f8fb5-t7mww,UID:d6b07521-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105069,Generation:0,CreationTimestamp:2020-05-06 19:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4be3a47-8fcd-11ea-99e8-0242ac110002 0xc0022c7807 0xc0022c7808}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c79c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c79e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.273: INFO: Pod "nginx-deployment-5c98f8fb5-wzl7n" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wzl7n,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-5c98f8fb5-wzl7n,UID:d6ae099e-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105059,Generation:0,CreationTimestamp:2020-05-06 19:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4be3a47-8fcd-11ea-99e8-0242ac110002 0xc0022c7a57 0xc0022c7a58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c7ae0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.273: INFO: Pod "nginx-deployment-5c98f8fb5-xrf6p" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xrf6p,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-5c98f8fb5-xrf6p,UID:d6adb82a-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105056,Generation:0,CreationTimestamp:2020-05-06 19:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4be3a47-8fcd-11ea-99e8-0242ac110002 0xc0022c7c37 0xc0022c7c38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022c7cb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022c7cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.273: INFO: Pod "nginx-deployment-5c98f8fb5-xzztk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xzztk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-5c98f8fb5-xzztk,UID:d694d795-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105051,Generation:0,CreationTimestamp:2020-05-06 19:14:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4be3a47-8fcd-11ea-99e8-0242ac110002 0xc0022c7dd7 0xc0022c7dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023ba090} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023ba0b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.273: INFO: Pod "nginx-deployment-5c98f8fb5-zpxgs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zpxgs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-5c98f8fb5-zpxgs,UID:d6ad9407-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105055,Generation:0,CreationTimestamp:2020-05-06 19:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 d4be3a47-8fcd-11ea-99e8-0242ac110002 0xc0023ba127 0xc0023ba128}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023ba1b0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0023ba1d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.273: INFO: Pod "nginx-deployment-85ddf47c5d-5bz7q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5bz7q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-5bz7q,UID:d6adf9df-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105060,Generation:0,CreationTimestamp:2020-05-06 19:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc0023ba887 0xc0023ba888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023ba960} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023ba980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.273: INFO: Pod "nginx-deployment-85ddf47c5d-6hb2f" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6hb2f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-6hb2f,UID:c97a5457-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9104914,Generation:0,CreationTimestamp:2020-05-06 19:14:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc0023ba9f7 0xc0023ba9f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023badd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023badf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.133,StartTime:2020-05-06 19:14:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 19:14:35 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://27030760e5eb19bbc2c0136b1728a6f56aff2f9de52c44774430d043900cf18f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.274: INFO: Pod "nginx-deployment-85ddf47c5d-6zdcd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6zdcd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-6zdcd,UID:d694bba1-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105041,Generation:0,CreationTimestamp:2020-05-06 19:14:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc0023baf87 0xc0023baf88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023bb0b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023bb0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.274: INFO: Pod "nginx-deployment-85ddf47c5d-7gtqv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7gtqv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-7gtqv,UID:d6ae1df8-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105065,Generation:0,CreationTimestamp:2020-05-06 19:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc0023bb167 0xc0023bb168}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023bb1e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023bb200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.274: INFO: Pod "nginx-deployment-85ddf47c5d-955sx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-955sx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-955sx,UID:d6ae094d-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105063,Generation:0,CreationTimestamp:2020-05-06 19:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc0023bb2e7 0xc0023bb2e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0023bb360} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023bb380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.274: INFO: Pod "nginx-deployment-85ddf47c5d-9dxfw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9dxfw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-9dxfw,UID:d6adfa9e-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105061,Generation:0,CreationTimestamp:2020-05-06 19:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc0023bb477 0xc0023bb478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023bb590} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023bb5b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.274: INFO: Pod "nginx-deployment-85ddf47c5d-c6qs6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c6qs6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-c6qs6,UID:d6792deb-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105072,Generation:0,CreationTimestamp:2020-05-06 19:14:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc0023bb627 0xc0023bb628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023bb800} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023bb820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-06 19:14:41 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.274: INFO: Pod "nginx-deployment-85ddf47c5d-dbkpj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dbkpj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-dbkpj,UID:d694ebe8-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105047,Generation:0,CreationTimestamp:2020-05-06 19:14:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc0023bbc87 0xc0023bbc88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023bbde0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023bbe80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.274: INFO: Pod "nginx-deployment-85ddf47c5d-fjgd6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fjgd6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-fjgd6,UID:ca36df15-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9104934,Generation:0,CreationTimestamp:2020-05-06 19:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc0023bbef7 0xc0023bbef8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00245a050} {node.kubernetes.io/unreachable Exists NoExecute 0xc00245a070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.177,StartTime:2020-05-06 19:14:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 19:14:37 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5c7d06d89707c1e102eea102c0b1b503cb2bb60e57a810a3584288898240d5d2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.274: INFO: Pod "nginx-deployment-85ddf47c5d-j4b7t" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-j4b7t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-j4b7t,UID:c97a5fca-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9104946,Generation:0,CreationTimestamp:2020-05-06 19:14:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc00245a157 0xc00245a158}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00245a1d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00245a1f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.134,StartTime:2020-05-06 19:14:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 19:14:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://751b4e45c933ad23436601a848d44a714f83de3728c31b864b3e009b4bd27bb6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.275: INFO: Pod "nginx-deployment-85ddf47c5d-mcvwj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mcvwj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-mcvwj,UID:d68eabf7-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105078,Generation:0,CreationTimestamp:2020-05-06 19:14:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc00245aa87 0xc00245aa88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00245ab00} {node.kubernetes.io/unreachable Exists NoExecute 0xc00245ab20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:41 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-06 19:14:42 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.275: INFO: Pod "nginx-deployment-85ddf47c5d-mw99f" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mw99f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-mw99f,UID:ca36ead7-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9104929,Generation:0,CreationTimestamp:2020-05-06 19:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc00245abd7 0xc00245abd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00245afa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00245afc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.136,StartTime:2020-05-06 19:14:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 19:14:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine 
docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://106f39d5cd1104e0f3c267f6ad52592c0e89f9365ff47595c8dae727f5855090}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.275: INFO: Pod "nginx-deployment-85ddf47c5d-s87v2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s87v2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-s87v2,UID:c979d5a7-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9104915,Generation:0,CreationTimestamp:2020-05-06 19:14:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc00245b277 0xc00245b278}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00245b440} {node.kubernetes.io/unreachable Exists NoExecute 0xc00245b460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:19 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.174,StartTime:2020-05-06 19:14:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 19:14:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4f16c5f27eee0ab94e6c5a0451c952fce939895b8f7855361a0caf1416172678}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.275: INFO: Pod "nginx-deployment-85ddf47c5d-sqfrl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sqfrl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-sqfrl,UID:d6ae13ec-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105064,Generation:0,CreationTimestamp:2020-05-06 19:14:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc00245b567 0xc00245b568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc00245b5e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00245b600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.275: INFO: Pod "nginx-deployment-85ddf47c5d-tdmp9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tdmp9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-tdmp9,UID:ca36f157-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9104950,Generation:0,CreationTimestamp:2020-05-06 19:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc00245b6a7 0xc00245b6a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00245b720} {node.kubernetes.io/unreachable Exists NoExecute 0xc00245b740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.175,StartTime:2020-05-06 19:14:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 19:14:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://742789dd61b9e101a1c824b460388abb1e55124fd74464b503ca78fbcc1cb433}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.275: INFO: Pod "nginx-deployment-85ddf47c5d-tfk48" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tfk48,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-tfk48,UID:ca5f4465-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9104939,Generation:0,CreationTimestamp:2020-05-06 19:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc00245b9f7 0xc00245b9f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc00245ba70} {node.kubernetes.io/unreachable Exists NoExecute 0xc00245bc00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.178,StartTime:2020-05-06 19:14:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 19:14:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://87721217806b422878020128dfc71a27cf6dac149048462bf8a372c1eab9d0de}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.275: INFO: Pod "nginx-deployment-85ddf47c5d-v47nn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v47nn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-v47nn,UID:d694c48f-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105043,Generation:0,CreationTimestamp:2020-05-06 19:14:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc00245be97 0xc00245be98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022bc2b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022bc2e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.275: INFO: Pod "nginx-deployment-85ddf47c5d-vqgjs" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vqgjs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-vqgjs,UID:d68ed88a-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105032,Generation:0,CreationTimestamp:2020-05-06 19:14:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc0022bc7c7 0xc0022bc7c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0022bc840} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022bc860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:41 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.276: INFO: Pod "nginx-deployment-85ddf47c5d-wdqpc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wdqpc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-wdqpc,UID:ca36eb5f-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9104935,Generation:0,CreationTimestamp:2020-05-06 19:14:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc0022bc9d7 0xc0022bc9d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022bca50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022bca70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:21 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.135,StartTime:2020-05-06 19:14:22 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 19:14:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ccaeb145354361c5a14c79387e7fcfdbd34b9cc4593f82a62f8b943528e8c907}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 19:14:43.276: INFO: Pod "nginx-deployment-85ddf47c5d-wnqk9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wnqk9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g575v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g575v/pods/nginx-deployment-85ddf47c5d-wnqk9,UID:d694ec27-8fcd-11ea-99e8-0242ac110002,ResourceVersion:9105044,Generation:0,CreationTimestamp:2020-05-06 19:14:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c9130aaf-8fcd-11ea-99e8-0242ac110002 0xc0022bcbb7 0xc0022bcbb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9v46j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9v46j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9v46j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0022bce60} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022bce80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 19:14:42 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 19:14:43.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-g575v" for this suite. May 6 19:15:12.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 19:15:12.811: INFO: namespace: e2e-tests-deployment-g575v, resource: bindings, ignored listing per whitelist May 6 19:15:12.818: INFO: namespace e2e-tests-deployment-g575v deletion completed in 29.485579712s • [SLOW TEST:55.075 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 19:15:12.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap 
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-e95a7286-8fcd-11ea-a618-0242ac110019
STEP: Creating a pod to test consume configMaps
May 6 19:15:13.725: INFO: Waiting up to 5m0s for pod "pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019" in namespace "e2e-tests-configmap-q4nlk" to be "success or failure"
May 6 19:15:13.845: INFO: Pod "pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 120.154836ms
May 6 19:15:16.059: INFO: Pod "pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.333700869s
May 6 19:15:18.063: INFO: Pod "pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337851859s
May 6 19:15:20.174: INFO: Pod "pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44930063s
May 6 19:15:22.177: INFO: Pod "pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 8.451933392s
May 6 19:15:24.181: INFO: Pod "pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 10.456177183s
May 6 19:15:26.493: INFO: Pod "pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 12.768300019s
May 6 19:15:28.498: INFO: Pod "pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 14.77249383s
May 6 19:15:30.502: INFO: Pod "pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 16.776894394s
May 6 19:15:32.506: INFO: Pod "pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.781049971s
STEP: Saw pod success
May 6 19:15:32.506: INFO: Pod "pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 19:15:32.509: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019 container configmap-volume-test:
STEP: delete the pod
May 6 19:15:32.879: INFO: Waiting for pod pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019 to disappear
May 6 19:15:33.120: INFO: Pod pod-configmaps-e97f4782-8fcd-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 19:15:33.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-q4nlk" for this suite.
May 6 19:15:41.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 19:15:41.630: INFO: namespace: e2e-tests-configmap-q4nlk, resource: bindings, ignored listing per whitelist
May 6 19:15:41.664: INFO: namespace e2e-tests-configmap-q4nlk deletion completed in 8.54028292s

• [SLOW TEST:28.846 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 19:15:41.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 6 19:15:42.124: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa712f7f-8fcd-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-5fm2d" to be "success or failure"
May 6 19:15:42.134: INFO: Pod "downwardapi-volume-fa712f7f-8fcd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 9.612617ms
May 6 19:15:44.138: INFO: Pod "downwardapi-volume-fa712f7f-8fcd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013582153s
May 6 19:15:46.254: INFO: Pod "downwardapi-volume-fa712f7f-8fcd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129504713s
May 6 19:15:49.080: INFO: Pod "downwardapi-volume-fa712f7f-8fcd-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 6.955451923s
May 6 19:15:51.084: INFO: Pod "downwardapi-volume-fa712f7f-8fcd-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.959992569s
STEP: Saw pod success
May 6 19:15:51.084: INFO: Pod "downwardapi-volume-fa712f7f-8fcd-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 19:15:51.087: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-fa712f7f-8fcd-11ea-a618-0242ac110019 container client-container:
STEP: delete the pod
May 6 19:15:51.200: INFO: Waiting for pod downwardapi-volume-fa712f7f-8fcd-11ea-a618-0242ac110019 to disappear
May 6 19:15:51.217: INFO: Pod downwardapi-volume-fa712f7f-8fcd-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 19:15:51.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5fm2d" for this suite.
May 6 19:15:59.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 19:15:59.307: INFO: namespace: e2e-tests-projected-5fm2d, resource: bindings, ignored listing per whitelist
May 6 19:15:59.346: INFO: namespace e2e-tests-projected-5fm2d deletion completed in 8.098191337s

• [SLOW TEST:17.682 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 19:15:59.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-04c44ff9-8fce-11ea-a618-0242ac110019
STEP: Creating a pod to test consume secrets
May 6 19:15:59.447: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-04c6df16-8fce-11ea-a618-0242ac110019" in namespace "e2e-tests-projected-7kh9v" to be "success or failure"
May 6 19:15:59.511: INFO: Pod "pod-projected-secrets-04c6df16-8fce-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 63.826219ms
May 6 19:16:01.672: INFO: Pod "pod-projected-secrets-04c6df16-8fce-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225486594s
May 6 19:16:03.739: INFO: Pod "pod-projected-secrets-04c6df16-8fce-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 4.291597331s
May 6 19:16:05.742: INFO: Pod "pod-projected-secrets-04c6df16-8fce-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.295261382s
STEP: Saw pod success
May 6 19:16:05.742: INFO: Pod "pod-projected-secrets-04c6df16-8fce-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 19:16:05.744: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-04c6df16-8fce-11ea-a618-0242ac110019 container projected-secret-volume-test:
STEP: delete the pod
May 6 19:16:05.836: INFO: Waiting for pod pod-projected-secrets-04c6df16-8fce-11ea-a618-0242ac110019 to disappear
May 6 19:16:05.853: INFO: Pod pod-projected-secrets-04c6df16-8fce-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 19:16:05.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7kh9v" for this suite.
May 6 19:16:11.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 19:16:11.981: INFO: namespace: e2e-tests-projected-7kh9v, resource: bindings, ignored listing per whitelist
May 6 19:16:11.993: INFO: namespace e2e-tests-projected-7kh9v deletion completed in 6.136287037s

• [SLOW TEST:12.646 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 19:16:11.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 6 19:16:12.357: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c72af14-8fce-11ea-a618-0242ac110019" in namespace "e2e-tests-downward-api-9fj9j" to be "success or failure"
May 6 19:16:12.529: INFO: Pod "downwardapi-volume-0c72af14-8fce-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 171.712888ms
May 6 19:16:14.533: INFO: Pod "downwardapi-volume-0c72af14-8fce-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175505673s
May 6 19:16:16.565: INFO: Pod "downwardapi-volume-0c72af14-8fce-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207070678s
May 6 19:16:18.744: INFO: Pod "downwardapi-volume-0c72af14-8fce-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.386560615s
STEP: Saw pod success
May 6 19:16:18.744: INFO: Pod "downwardapi-volume-0c72af14-8fce-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 19:16:18.747: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-0c72af14-8fce-11ea-a618-0242ac110019 container client-container:
STEP: delete the pod
May 6 19:16:18.770: INFO: Waiting for pod downwardapi-volume-0c72af14-8fce-11ea-a618-0242ac110019 to disappear
May 6 19:16:18.804: INFO: Pod downwardapi-volume-0c72af14-8fce-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 19:16:18.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9fj9j" for this suite.
May 6 19:16:25.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 19:16:25.203: INFO: namespace: e2e-tests-downward-api-9fj9j, resource: bindings, ignored listing per whitelist
May 6 19:16:25.210: INFO: namespace e2e-tests-downward-api-9fj9j deletion completed in 6.40144455s

• [SLOW TEST:13.217 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 19:16:25.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 6 19:16:25.951: INFO: Waiting up to 5m0s for pod "downwardapi-volume-149268a3-8fce-11ea-a618-0242ac110019" in namespace "e2e-tests-downward-api-pctwz" to be "success or failure"
May 6 19:16:26.164: INFO: Pod "downwardapi-volume-149268a3-8fce-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 213.054772ms
May 6 19:16:28.278: INFO: Pod "downwardapi-volume-149268a3-8fce-11ea-a618-0242ac110019": Phase="Pending", Reason="", readiness=false. Elapsed: 2.326911946s
May 6 19:16:30.281: INFO: Pod "downwardapi-volume-149268a3-8fce-11ea-a618-0242ac110019": Phase="Running", Reason="", readiness=true. Elapsed: 4.330423045s
May 6 19:16:32.285: INFO: Pod "downwardapi-volume-149268a3-8fce-11ea-a618-0242ac110019": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.334143449s
STEP: Saw pod success
May 6 19:16:32.285: INFO: Pod "downwardapi-volume-149268a3-8fce-11ea-a618-0242ac110019" satisfied condition "success or failure"
May 6 19:16:32.288: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-149268a3-8fce-11ea-a618-0242ac110019 container client-container:
STEP: delete the pod
May 6 19:16:32.370: INFO: Waiting for pod downwardapi-volume-149268a3-8fce-11ea-a618-0242ac110019 to disappear
May 6 19:16:32.401: INFO: Pod downwardapi-volume-149268a3-8fce-11ea-a618-0242ac110019 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 19:16:32.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-pctwz" for this suite.
May 6 19:16:38.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 19:16:38.470: INFO: namespace: e2e-tests-downward-api-pctwz, resource: bindings, ignored listing per whitelist
May 6 19:16:39.697: INFO: namespace e2e-tests-downward-api-pctwz deletion completed in 7.292374435s

• [SLOW TEST:14.487 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch
  should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 19:16:39.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
May 6 19:16:39.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hht57'
May 6 19:16:42.490: INFO: stderr: ""
May 6 19:16:42.490: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
May 6 19:16:43.494: INFO: Selector matched 1 pods for map[app:redis]
May 6 19:16:43.494: INFO: Found 0 / 1
May 6 19:16:45.372: INFO: Selector matched 1 pods for map[app:redis]
May 6 19:16:45.372: INFO: Found 0 / 1
May 6 19:16:45.554: INFO: Selector matched 1 pods for map[app:redis]
May 6 19:16:45.554: INFO: Found 0 / 1
May 6 19:16:46.535: INFO: Selector matched 1 pods for map[app:redis]
May 6 19:16:46.535: INFO: Found 0 / 1
May 6 19:16:47.494: INFO: Selector matched 1 pods for map[app:redis]
May 6 19:16:47.494: INFO: Found 1 / 1
May 6 19:16:47.494: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
May 6 19:16:47.497: INFO: Selector matched 1 pods for map[app:redis]
May 6 19:16:47.497: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 6 19:16:47.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-fs62x --namespace=e2e-tests-kubectl-hht57 -p {"metadata":{"annotations":{"x":"y"}}}'
May 6 19:16:47.616: INFO: stderr: ""
May 6 19:16:47.616: INFO: stdout: "pod/redis-master-fs62x patched\n"
STEP: checking annotations
May 6 19:16:47.691: INFO: Selector matched 1 pods for map[app:redis]
May 6 19:16:47.691: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 6 19:16:47.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hht57" for this suite.
May 6 19:17:11.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 6 19:17:11.743: INFO: namespace: e2e-tests-kubectl-hht57, resource: bindings, ignored listing per whitelist
May 6 19:17:11.822: INFO: namespace e2e-tests-kubectl-hht57 deletion completed in 24.126469077s

• [SLOW TEST:32.125 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
May 6 19:17:11.822: INFO: Running AfterSuite actions on all nodes
May 6 19:17:11.822: INFO: Running AfterSuite actions on node 1
May 6 19:17:11.822: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 7234.800 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS