I0506 10:46:46.938718 7 e2e.go:224] Starting e2e run "e1c54bfb-8f86-11ea-b5fe-0242ac110017" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588762006 - Will randomize all specs
Will run 201 of 2164 specs

May 6 10:46:47.138: INFO: >>> kubeConfig: /root/.kube/config
May 6 10:46:47.144: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 6 10:46:47.159: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 6 10:46:47.186: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 6 10:46:47.186: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 6 10:46:47.186: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 6 10:46:47.194: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 6 10:46:47.194: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 6 10:46:47.194: INFO: e2e test version: v1.13.12
May 6 10:46:47.196: INFO: kube-apiserver version: v1.13.12
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 6 10:46:47.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
May 6 10:46:47.313: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 6 10:46:47.336: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e24a9753-8f86-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-hlkj9" to be "success or failure"
May 6 10:46:47.357: INFO: Pod "downwardapi-volume-e24a9753-8f86-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 21.509217ms
May 6 10:46:49.460: INFO: Pod "downwardapi-volume-e24a9753-8f86-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124177498s
May 6 10:46:51.463: INFO: Pod "downwardapi-volume-e24a9753-8f86-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.127593013s STEP: Saw pod success May 6 10:46:51.463: INFO: Pod "downwardapi-volume-e24a9753-8f86-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:46:51.466: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-e24a9753-8f86-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 10:46:51.512: INFO: Waiting for pod downwardapi-volume-e24a9753-8f86-11ea-b5fe-0242ac110017 to disappear May 6 10:46:51.535: INFO: Pod downwardapi-volume-e24a9753-8f86-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:46:51.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hlkj9" for this suite. May 6 10:46:57.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:46:57.607: INFO: namespace: e2e-tests-projected-hlkj9, resource: bindings, ignored listing per whitelist May 6 10:46:57.666: INFO: namespace e2e-tests-projected-hlkj9 deletion completed in 6.127078401s • [SLOW TEST:10.470 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:46:57.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 6 10:46:57.811: INFO: Pod name pod-release: Found 0 pods out of 1 May 6 10:47:02.815: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:47:03.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-cd6tf" for this suite. 
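For reference, the "Projected downwardAPI ... should provide container's memory request" spec above boils down to a pod that mounts a projected downwardAPI volume and reads its own memory request back out of a file. A minimal hand-written sketch of that scenario follows; the pod name, image, mount path and request value are illustrative stand-ins, not the manifest the e2e framework generates.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo            # stand-in name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                          # the e2e suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
# Once the pod has Succeeded, the log should show the request in bytes (33554432 for 32Mi)
kubectl logs downwardapi-volume-demo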
May 6 10:47:09.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:47:09.919: INFO: namespace: e2e-tests-replication-controller-cd6tf, resource: bindings, ignored listing per whitelist May 6 10:47:09.966: INFO: namespace e2e-tests-replication-controller-cd6tf deletion completed in 6.124219028s • [SLOW TEST:12.300 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:47:09.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 6 10:47:10.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-9v4pv' May 6 10:47:13.030: INFO: stderr: "" May 6 10:47:13.030: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 May 6 10:47:13.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-9v4pv' May 6 10:47:18.572: INFO: stderr: "" May 6 10:47:18.572: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:47:18.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-9v4pv" for this suite. 
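The ReplicationController spec above ("should release no longer matching pods") can be walked through by hand roughly as follows; the controller name mirrors the test's pod-release, while the image and label values are illustrative.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Relabel the pod so it no longer matches the selector: the RC "releases" (orphans) it
POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
kubectl label pod "$POD" --overwrite name=released
# The released pod keeps running without an ownerReference, and the RC creates a fresh replica
kubectl get pods --show-labels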
May 6 10:47:24.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:47:24.651: INFO: namespace: e2e-tests-kubectl-9v4pv, resource: bindings, ignored listing per whitelist May 6 10:47:24.714: INFO: namespace e2e-tests-kubectl-9v4pv deletion completed in 6.125137743s • [SLOW TEST:14.747 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:47:24.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-p4cwr May 6 10:47:28.888: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-p4cwr STEP: checking the pod's current state and verifying that restartCount is present May 6 10:47:28.891: INFO: Initial restart count of pod liveness-http is 0 May 6 10:47:49.086: INFO: Restart count of pod e2e-tests-container-probe-p4cwr/liveness-http is now 1 (20.195716582s elapsed) May 6 10:48:09.131: INFO: Restart count of pod e2e-tests-container-probe-p4cwr/liveness-http is now 2 (40.240032762s elapsed) May 6 10:48:31.308: INFO: Restart count of pod e2e-tests-container-probe-p4cwr/liveness-http is now 3 (1m2.417074953s elapsed) May 6 10:48:49.429: INFO: Restart count of pod e2e-tests-container-probe-p4cwr/liveness-http is now 4 (1m20.538593452s elapsed) May 6 10:50:00.128: INFO: Restart count of pod e2e-tests-container-probe-p4cwr/liveness-http is now 5 (2m31.237360346s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:50:00.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-p4cwr" for this suite. 
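The kubectl invocation in the run-pod spec above is already shown verbatim; for context, the --generator=run-pod/v1 flag dates the v1.13 client, while later kubectl releases dropped generators and kubectl run now always creates a bare Pod. The namespace below is a placeholder.

kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine --namespace=demo-ns
# The test only asserts that a Pod (not a Deployment or Job) was created, then deletes it
kubectl get pod e2e-test-nginx-pod --namespace=demo-ns
kubectl delete pod e2e-test-nginx-pod --namespace=demo-ns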
May 6 10:50:06.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:50:06.230: INFO: namespace: e2e-tests-container-probe-p4cwr, resource: bindings, ignored listing per whitelist May 6 10:50:06.275: INFO: namespace e2e-tests-container-probe-p4cwr deletion completed in 6.09592317s • [SLOW TEST:161.562 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:50:06.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-58edf729-8f87-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 10:50:06.380: INFO: Waiting up to 5m0s for pod "pod-secrets-58f03e56-8f87-11ea-b5fe-0242ac110017" in namespace "e2e-tests-secrets-czmds" to be "success or failure" May 6 10:50:06.383: INFO: Pod "pod-secrets-58f03e56-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.506726ms May 6 10:50:08.388: INFO: Pod "pod-secrets-58f03e56-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008280949s May 6 10:50:10.392: INFO: Pod "pod-secrets-58f03e56-8f87-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.012464392s May 6 10:50:12.397: INFO: Pod "pod-secrets-58f03e56-8f87-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017253692s STEP: Saw pod success May 6 10:50:12.397: INFO: Pod "pod-secrets-58f03e56-8f87-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:50:12.400: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-58f03e56-8f87-11ea-b5fe-0242ac110017 container secret-volume-test: STEP: delete the pod May 6 10:50:12.438: INFO: Waiting for pod pod-secrets-58f03e56-8f87-11ea-b5fe-0242ac110017 to disappear May 6 10:50:12.442: INFO: Pod pod-secrets-58f03e56-8f87-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:50:12.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-czmds" for this suite. 
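The container-probing spec above only asserts that restartCount never decreases while an HTTP liveness probe keeps failing. One self-contained way to reproduce that behaviour is sketched below, reusing the nginx image already pulled by this suite and a probe path that always 404s; both choices are illustrative, not the test's own image or endpoint.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http                       # same name as the test pod; the spec is only a sketch
spec:
  containers:
  - name: liveness
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /no-such-path                 # always returns 404, so every probe fails
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1
EOF
# restartCount should only ever climb, which is exactly what the conformance test checks
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'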
May 6 10:50:18.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:50:18.552: INFO: namespace: e2e-tests-secrets-czmds, resource: bindings, ignored listing per whitelist May 6 10:50:18.557: INFO: namespace e2e-tests-secrets-czmds deletion completed in 6.111551508s • [SLOW TEST:12.282 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:50:18.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 6 10:50:18.696: INFO: Waiting up to 5m0s for pod "pod-6047de6b-8f87-11ea-b5fe-0242ac110017" in namespace "e2e-tests-emptydir-sv7j2" to be "success or failure" May 6 10:50:18.706: INFO: Pod "pod-6047de6b-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 10.321302ms May 6 10:50:20.710: INFO: Pod "pod-6047de6b-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013977034s May 6 10:50:22.714: INFO: Pod "pod-6047de6b-8f87-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018229436s STEP: Saw pod success May 6 10:50:22.714: INFO: Pod "pod-6047de6b-8f87-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:50:22.717: INFO: Trying to get logs from node hunter-worker2 pod pod-6047de6b-8f87-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 10:50:22.743: INFO: Waiting for pod pod-6047de6b-8f87-11ea-b5fe-0242ac110017 to disappear May 6 10:50:22.748: INFO: Pod pod-6047de6b-8f87-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:50:22.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sv7j2" for this suite. 
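A sketch of the secret-volume scenario above (non-root user, restrictive defaultMode, fsGroup set): the secret name, uid/gid and mode below are illustrative, not the values generated by the framework.

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                         # non-root
    fsGroup: 1000                           # volume files get this group ownership
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0440                     # group-readable, so uid 1000 can read it via fsGroup 1000
EOF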
May 6 10:50:28.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:50:28.851: INFO: namespace: e2e-tests-emptydir-sv7j2, resource: bindings, ignored listing per whitelist May 6 10:50:28.970: INFO: namespace e2e-tests-emptydir-sv7j2 deletion completed in 6.219079439s • [SLOW TEST:10.413 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:50:28.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 6 10:50:29.060: INFO: Waiting up to 5m0s for pod "pod-6673beef-8f87-11ea-b5fe-0242ac110017" in namespace "e2e-tests-emptydir-2ws4p" to be "success or failure" May 6 10:50:29.083: INFO: Pod "pod-6673beef-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 22.836459ms May 6 10:50:31.087: INFO: Pod "pod-6673beef-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026877652s May 6 10:50:33.092: INFO: Pod "pod-6673beef-8f87-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031414693s STEP: Saw pod success May 6 10:50:33.092: INFO: Pod "pod-6673beef-8f87-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:50:33.095: INFO: Trying to get logs from node hunter-worker2 pod pod-6673beef-8f87-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 10:50:33.122: INFO: Waiting for pod pod-6673beef-8f87-11ea-b5fe-0242ac110017 to disappear May 6 10:50:33.126: INFO: Pod pod-6673beef-8f87-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:50:33.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2ws4p" for this suite. 
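The "volume on tmpfs should have the correct mode" spec above checks the permissions of the emptyDir mount point itself. Roughly equivalent by hand, with an illustrative image and mount path:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /cache && stat -c '%a' /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory                        # the Memory medium backs the emptyDir with tmpfs
EOF
# The log should show a tmpfs mount and mode 777, the default for an emptyDir directory
kubectl logs emptydir-tmpfs-demo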
May 6 10:50:39.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:50:39.176: INFO: namespace: e2e-tests-emptydir-2ws4p, resource: bindings, ignored listing per whitelist May 6 10:50:39.255: INFO: namespace e2e-tests-emptydir-2ws4p deletion completed in 6.126059883s • [SLOW TEST:10.285 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:50:39.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-6c98794f-8f87-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 10:50:39.380: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6c9bc9b7-8f87-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-4sh99" to be "success or failure" May 6 10:50:39.407: INFO: Pod "pod-projected-secrets-6c9bc9b7-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 27.106569ms May 6 10:50:41.411: INFO: Pod "pod-projected-secrets-6c9bc9b7-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031132241s May 6 10:50:43.415: INFO: Pod "pod-projected-secrets-6c9bc9b7-8f87-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034714412s STEP: Saw pod success May 6 10:50:43.415: INFO: Pod "pod-projected-secrets-6c9bc9b7-8f87-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:50:43.417: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-6c9bc9b7-8f87-11ea-b5fe-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 6 10:50:43.481: INFO: Waiting for pod pod-projected-secrets-6c9bc9b7-8f87-11ea-b5fe-0242ac110017 to disappear May 6 10:50:43.544: INFO: Pod pod-projected-secrets-6c9bc9b7-8f87-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:50:43.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-4sh99" for this suite. 
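The projected-secret spec above is the projected-volume flavour of the same idea: a secret projected into a volume with a defaultMode applied to every rendered file. A hedged sketch, with illustrative names and mode:

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "stat -c '%a %n' /etc/projected/data-1 && cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      defaultMode: 0400                     # applied to every file rendered into the projected volume
      sources:
      - secret:
          name: projected-secret-demo
EOF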
May 6 10:50:49.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:50:49.608: INFO: namespace: e2e-tests-projected-4sh99, resource: bindings, ignored listing per whitelist May 6 10:50:49.614: INFO: namespace e2e-tests-projected-4sh99 deletion completed in 6.066736277s • [SLOW TEST:10.359 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:50:49.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 10:50:49.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' May 6 10:50:49.763: INFO: stderr: "" May 6 10:50:49.763: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" May 6 10:50:49.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hfdj9' May 6 10:50:50.132: INFO: stderr: "" May 6 10:50:50.132: INFO: stdout: "replicationcontroller/redis-master created\n" May 6 10:50:50.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hfdj9' May 6 10:50:50.484: INFO: stderr: "" May 6 10:50:50.485: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. May 6 10:50:51.488: INFO: Selector matched 1 pods for map[app:redis] May 6 10:50:51.488: INFO: Found 0 / 1 May 6 10:50:52.490: INFO: Selector matched 1 pods for map[app:redis] May 6 10:50:52.490: INFO: Found 0 / 1 May 6 10:50:53.489: INFO: Selector matched 1 pods for map[app:redis] May 6 10:50:53.489: INFO: Found 0 / 1 May 6 10:50:54.490: INFO: Selector matched 1 pods for map[app:redis] May 6 10:50:54.490: INFO: Found 0 / 1 May 6 10:50:55.629: INFO: Selector matched 1 pods for map[app:redis] May 6 10:50:55.629: INFO: Found 0 / 1 May 6 10:50:56.490: INFO: Selector matched 1 pods for map[app:redis] May 6 10:50:56.490: INFO: Found 0 / 1 May 6 10:50:57.521: INFO: Selector matched 1 pods for map[app:redis] May 6 10:50:57.521: INFO: Found 1 / 1 May 6 10:50:57.521: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 May 6 10:50:57.714: INFO: Selector matched 1 pods for map[app:redis] May 6 10:50:57.714: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 6 10:50:57.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-xg6h4 --namespace=e2e-tests-kubectl-hfdj9' May 6 10:50:57.839: INFO: stderr: "" May 6 10:50:57.839: INFO: stdout: "Name: redis-master-xg6h4\nNamespace: e2e-tests-kubectl-hfdj9\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Wed, 06 May 2020 10:50:50 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.77\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://2da90959a92693b9e30444b26b38c5b921bcf28d1d86f699ea7fee100e896794\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 06 May 2020 10:50:55 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-vxkwx (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-vxkwx:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-vxkwx\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned e2e-tests-kubectl-hfdj9/redis-master-xg6h4 to hunter-worker2\n Normal Pulled 6s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker2 Created container\n Normal Started 2s kubelet, hunter-worker2 Started container\n" May 6 10:50:57.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-hfdj9' May 6 10:50:57.954: INFO: stderr: "" May 6 10:50:57.954: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-hfdj9\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: redis-master-xg6h4\n" May 6 10:50:57.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-hfdj9' May 6 10:50:58.059: INFO: stderr: "" May 6 10:50:58.059: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-hfdj9\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.6.205\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.77:6379\nSession Affinity: None\nEvents: \n" May 6 10:50:58.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node 
hunter-control-plane' May 6 10:50:58.176: INFO: stderr: "" May 6 10:50:58.176: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 06 May 2020 10:50:49 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 06 May 2020 10:50:49 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 06 May 2020 10:50:49 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 06 May 2020 10:50:49 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 51d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 51d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 51d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 51d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 51d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 51d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 51d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 6 10:50:58.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-hfdj9' May 6 10:50:58.288: INFO: stderr: "" May 6 10:50:58.288: INFO: stdout: "Name: e2e-tests-kubectl-hfdj9\nLabels: e2e-framework=kubectl\n e2e-run=e1c54bfb-8f86-11ea-b5fe-0242ac110017\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 
10:50:58.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hfdj9" for this suite. May 6 10:51:22.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:51:22.420: INFO: namespace: e2e-tests-kubectl-hfdj9, resource: bindings, ignored listing per whitelist May 6 10:51:22.440: INFO: namespace e2e-tests-kubectl-hfdj9 deletion completed in 24.14859786s • [SLOW TEST:32.825 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:51:22.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-866d8bcd-8f87-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 10:51:22.740: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-866e64c7-8f87-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-jrngv" to be "success or failure" May 6 10:51:22.762: INFO: Pod "pod-projected-secrets-866e64c7-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 21.503803ms May 6 10:51:24.766: INFO: Pod "pod-projected-secrets-866e64c7-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026268853s May 6 10:51:26.770: INFO: Pod "pod-projected-secrets-866e64c7-8f87-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.029974706s May 6 10:51:28.775: INFO: Pod "pod-projected-secrets-866e64c7-8f87-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.034626719s STEP: Saw pod success May 6 10:51:28.775: INFO: Pod "pod-projected-secrets-866e64c7-8f87-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:51:28.778: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-866e64c7-8f87-11ea-b5fe-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 6 10:51:29.039: INFO: Waiting for pod pod-projected-secrets-866e64c7-8f87-11ea-b5fe-0242ac110017 to disappear May 6 10:51:29.062: INFO: Pod pod-projected-secrets-866e64c7-8f87-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:51:29.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jrngv" for this suite. May 6 10:51:35.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:51:35.221: INFO: namespace: e2e-tests-projected-jrngv, resource: bindings, ignored listing per whitelist May 6 10:51:35.309: INFO: namespace e2e-tests-projected-jrngv deletion completed in 6.244044792s • [SLOW TEST:12.869 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:51:35.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-8e059169-8f87-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 10:51:35.454: INFO: Waiting up to 5m0s for pod "pod-configmaps-8e0618cd-8f87-11ea-b5fe-0242ac110017" in namespace "e2e-tests-configmap-nsrln" to be "success or failure" May 6 10:51:35.463: INFO: Pod "pod-configmaps-8e0618cd-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.440915ms May 6 10:51:37.465: INFO: Pod "pod-configmaps-8e0618cd-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011691808s May 6 10:51:39.503: INFO: Pod "pod-configmaps-8e0618cd-8f87-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049724165s STEP: Saw pod success May 6 10:51:39.503: INFO: Pod "pod-configmaps-8e0618cd-8f87-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:51:39.507: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-8e0618cd-8f87-11ea-b5fe-0242ac110017 container configmap-volume-test: STEP: delete the pod May 6 10:51:39.530: INFO: Waiting for pod pod-configmaps-8e0618cd-8f87-11ea-b5fe-0242ac110017 to disappear May 6 10:51:39.554: INFO: Pod pod-configmaps-8e0618cd-8f87-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:51:39.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nsrln" for this suite. May 6 10:51:45.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:51:45.634: INFO: namespace: e2e-tests-configmap-nsrln, resource: bindings, ignored listing per whitelist May 6 10:51:45.654: INFO: namespace e2e-tests-configmap-nsrln deletion completed in 6.095810455s • [SLOW TEST:10.345 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:51:45.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 6 10:51:45.748: INFO: Waiting up to 5m0s for pod "pod-942924c4-8f87-11ea-b5fe-0242ac110017" in namespace "e2e-tests-emptydir-d4xzs" to be "success or failure" May 6 10:51:45.751: INFO: Pod "pod-942924c4-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.013621ms May 6 10:51:47.769: INFO: Pod "pod-942924c4-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021148332s May 6 10:51:49.773: INFO: Pod "pod-942924c4-8f87-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.025233184s May 6 10:51:51.777: INFO: Pod "pod-942924c4-8f87-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.028857934s STEP: Saw pod success May 6 10:51:51.777: INFO: Pod "pod-942924c4-8f87-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:51:51.780: INFO: Trying to get logs from node hunter-worker pod pod-942924c4-8f87-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 10:51:51.799: INFO: Waiting for pod pod-942924c4-8f87-11ea-b5fe-0242ac110017 to disappear May 6 10:51:51.804: INFO: Pod pod-942924c4-8f87-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:51:51.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-d4xzs" for this suite. May 6 10:51:57.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:51:57.874: INFO: namespace: e2e-tests-emptydir-d4xzs, resource: bindings, ignored listing per whitelist May 6 10:51:57.906: INFO: namespace e2e-tests-emptydir-d4xzs deletion completed in 6.098794393s • [SLOW TEST:12.252 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:51:57.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-9b7a5b58-8f87-11ea-b5fe-0242ac110017 STEP: Creating secret with name secret-projected-all-test-volume-9b7a5b2a-8f87-11ea-b5fe-0242ac110017 STEP: Creating a pod to test Check all projections for projected volume plugin May 6 10:51:58.028: INFO: Waiting up to 5m0s for pod "projected-volume-9b7a5aad-8f87-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-z645x" to be "success or failure" May 6 10:51:58.032: INFO: Pod "projected-volume-9b7a5aad-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068291ms May 6 10:52:00.170: INFO: Pod "projected-volume-9b7a5aad-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141784288s May 6 10:52:02.174: INFO: Pod "projected-volume-9b7a5aad-8f87-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.145828849s STEP: Saw pod success May 6 10:52:02.174: INFO: Pod "projected-volume-9b7a5aad-8f87-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:52:02.176: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-9b7a5aad-8f87-11ea-b5fe-0242ac110017 container projected-all-volume-test: STEP: delete the pod May 6 10:52:02.211: INFO: Waiting for pod projected-volume-9b7a5aad-8f87-11ea-b5fe-0242ac110017 to disappear May 6 10:52:02.234: INFO: Pod projected-volume-9b7a5aad-8f87-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:52:02.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z645x" for this suite. May 6 10:52:08.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:52:08.321: INFO: namespace: e2e-tests-projected-z645x, resource: bindings, ignored listing per whitelist May 6 10:52:08.333: INFO: namespace e2e-tests-projected-z645x deletion completed in 6.095766552s • [SLOW TEST:10.427 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:52:08.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:52:14.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-7zdt8" for this suite. May 6 10:52:20.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:52:20.689: INFO: namespace: e2e-tests-namespaces-7zdt8, resource: bindings, ignored listing per whitelist May 6 10:52:20.753: INFO: namespace e2e-tests-namespaces-7zdt8 deletion completed in 6.098913319s STEP: Destroying namespace "e2e-tests-nsdeletetest-r9lrd" for this suite. 
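The "Projected combined" spec that finished above is the one that mixes every projection source (configMap, secret, downwardAPI) in a single volume. A hand-written equivalent is sketched below; the configMap/secret names, keys and paths are hypothetical.

kubectl create configmap projected-all-cm --from-literal=configmap-data=from-configmap
kubectl create secret generic projected-all-secret --from-literal=secret-data=from-secret
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "cat /all/podname /all/cm/configmap-data /all/secret/secret-data"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: projected-all-cm
          items:
          - key: configmap-data
            path: cm/configmap-data
      - secret:
          name: projected-all-secret
          items:
          - key: secret-data
            path: secret/secret-data
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF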
May 6 10:52:20.756: INFO: Namespace e2e-tests-nsdeletetest-r9lrd was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-s44qd" for this suite. May 6 10:52:26.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:52:26.794: INFO: namespace: e2e-tests-nsdeletetest-s44qd, resource: bindings, ignored listing per whitelist May 6 10:52:26.843: INFO: namespace e2e-tests-nsdeletetest-s44qd deletion completed in 6.087191344s • [SLOW TEST:18.509 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:52:26.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 6 10:52:26.970: INFO: Waiting up to 5m0s for pod "pod-acbb6460-8f87-11ea-b5fe-0242ac110017" in namespace "e2e-tests-emptydir-gzqhz" to be "success or failure" May 6 10:52:26.973: INFO: Pod "pod-acbb6460-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.821711ms May 6 10:52:29.019: INFO: Pod "pod-acbb6460-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049147579s May 6 10:52:31.022: INFO: Pod "pod-acbb6460-8f87-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052289802s STEP: Saw pod success May 6 10:52:31.022: INFO: Pod "pod-acbb6460-8f87-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:52:31.025: INFO: Trying to get logs from node hunter-worker pod pod-acbb6460-8f87-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 10:52:31.076: INFO: Waiting for pod pod-acbb6460-8f87-11ea-b5fe-0242ac110017 to disappear May 6 10:52:31.111: INFO: Pod pod-acbb6460-8f87-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:52:31.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-gzqhz" for this suite. 
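The Namespaces spec above can be walked through manually with a few kubectl commands; the namespace and service names below are placeholders, and kubectl wait --for=delete assumes a reasonably recent client.

kubectl create namespace nsdelete-demo
kubectl create service clusterip test-service --tcp=80:80 -n nsdelete-demo
kubectl delete namespace nsdelete-demo
kubectl wait --for=delete namespace/nsdelete-demo --timeout=120s
# Recreating the namespace must give an empty one: the service must not come back
kubectl create namespace nsdelete-demo
kubectl get services -n nsdelete-demo        # expect "No resources found"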
May 6 10:52:37.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:52:37.234: INFO: namespace: e2e-tests-emptydir-gzqhz, resource: bindings, ignored listing per whitelist May 6 10:52:37.255: INFO: namespace e2e-tests-emptydir-gzqhz deletion completed in 6.140512058s • [SLOW TEST:10.411 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:52:37.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-b2ef916c-8f87-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 10:52:37.398: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b2f176d0-8f87-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-p42nd" to be "success or failure" May 6 10:52:37.404: INFO: Pod "pod-projected-configmaps-b2f176d0-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.844996ms May 6 10:52:39.409: INFO: Pod "pod-projected-configmaps-b2f176d0-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011272367s May 6 10:52:41.528: INFO: Pod "pod-projected-configmaps-b2f176d0-8f87-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.130286621s STEP: Saw pod success May 6 10:52:41.528: INFO: Pod "pod-projected-configmaps-b2f176d0-8f87-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:52:41.532: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-b2f176d0-8f87-11ea-b5fe-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 6 10:52:41.711: INFO: Waiting for pod pod-projected-configmaps-b2f176d0-8f87-11ea-b5fe-0242ac110017 to disappear May 6 10:52:41.917: INFO: Pod pod-projected-configmaps-b2f176d0-8f87-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:52:41.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p42nd" for this suite. 
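Both the earlier "ConfigMap ... volume with mappings" spec and the projected-configMap spec above project configMap keys as files; the mappings variant additionally renames keys via items. A sketch with illustrative names and paths:

kubectl create configmap configmap-map-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-map-demo
      items:
      - key: data-1
        path: path/to/data-1                # the key is remapped onto a relative path inside the mount
EOF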
May 6 10:52:47.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:52:48.075: INFO: namespace: e2e-tests-projected-p42nd, resource: bindings, ignored listing per whitelist May 6 10:52:48.118: INFO: namespace e2e-tests-projected-p42nd deletion completed in 6.196335012s • [SLOW TEST:10.863 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:52:48.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 6 10:52:48.257: INFO: Waiting up to 5m0s for pod "downward-api-b96bf7ec-8f87-11ea-b5fe-0242ac110017" in namespace "e2e-tests-downward-api-qrwbt" to be "success or failure" May 6 10:52:48.266: INFO: Pod "downward-api-b96bf7ec-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 8.974554ms May 6 10:52:51.394: INFO: Pod "downward-api-b96bf7ec-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.136157156s May 6 10:52:53.398: INFO: Pod "downward-api-b96bf7ec-8f87-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.141091721s STEP: Saw pod success May 6 10:52:53.399: INFO: Pod "downward-api-b96bf7ec-8f87-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:52:53.401: INFO: Trying to get logs from node hunter-worker pod downward-api-b96bf7ec-8f87-11ea-b5fe-0242ac110017 container dapi-container: STEP: delete the pod May 6 10:52:53.429: INFO: Waiting for pod downward-api-b96bf7ec-8f87-11ea-b5fe-0242ac110017 to disappear May 6 10:52:53.440: INFO: Pod downward-api-b96bf7ec-8f87-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:52:53.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qrwbt" for this suite. 
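The sig-node Downward API spec above exposes the container's own requests and limits as environment variables rather than files. A minimal sketch; the variable names, image and resource values are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU_|MEMORY_'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu            # rounded up to whole cores unless a divisor is set
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory         # reported in bytes by default
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF
kubectl logs downward-api-env-demo           # after the pod completes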
May 6 10:52:59.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:52:59.570: INFO: namespace: e2e-tests-downward-api-qrwbt, resource: bindings, ignored listing per whitelist May 6 10:52:59.582: INFO: namespace e2e-tests-downward-api-qrwbt deletion completed in 6.118985243s • [SLOW TEST:11.464 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:52:59.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 10:52:59.789: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 63.33186ms) May 6 10:52:59.793: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 4.326523ms) May 6 10:52:59.797: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.687071ms) May 6 10:52:59.800: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.145074ms) May 6 10:52:59.804: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.31457ms) May 6 10:52:59.807: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.06818ms) May 6 10:52:59.811: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.649434ms) May 6 10:52:59.814: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.299583ms) May 6 10:52:59.817: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.438102ms) May 6 10:52:59.820: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.930469ms) May 6 10:52:59.824: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.167335ms) May 6 10:52:59.827: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.174324ms) May 6 10:52:59.830: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.088539ms) May 6 10:52:59.833: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.281233ms) May 6 10:52:59.836: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.086451ms) May 6 10:52:59.840: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.436492ms) May 6 10:52:59.843: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.593146ms) May 6 10:52:59.847: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.51465ms) May 6 10:52:59.851: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.769634ms) May 6 10:52:59.854: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.993539ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:52:59.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-9x8fk" for this suite. May 6 10:53:05.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:53:05.918: INFO: namespace: e2e-tests-proxy-9x8fk, resource: bindings, ignored listing per whitelist May 6 10:53:05.946: INFO: namespace e2e-tests-proxy-9x8fk deletion completed in 6.089083419s • [SLOW TEST:6.364 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:53:05.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 6 10:53:06.608: INFO: created pod pod-service-account-defaultsa May 6 10:53:06.608: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 6 10:53:06.615: INFO: created pod pod-service-account-mountsa May 6 10:53:06.615: INFO: pod pod-service-account-mountsa service account token volume mount: true May 6 10:53:06.635: INFO: created pod pod-service-account-nomountsa May 6 10:53:06.635: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 6 10:53:06.650: INFO: created pod pod-service-account-defaultsa-mountspec May 6 10:53:06.650: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 6 10:53:06.696: INFO: created pod pod-service-account-mountsa-mountspec May 6 10:53:06.697: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 6 10:53:06.716: INFO: created pod pod-service-account-nomountsa-mountspec May 6 10:53:06.716: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 6 10:53:06.741: INFO: created pod pod-service-account-defaultsa-nomountspec May 6 10:53:06.742: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 6 10:53:06.772: INFO: created pod pod-service-account-mountsa-nomountspec May 6 10:53:06.772: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 6 10:53:06.788: INFO: created pod pod-service-account-nomountsa-nomountspec May 6 10:53:06.788: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:53:06.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-mdm6m" for this suite. May 6 10:53:36.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:53:36.926: INFO: namespace: e2e-tests-svcaccounts-mdm6m, resource: bindings, ignored listing per whitelist May 6 10:53:36.976: INFO: namespace e2e-tests-svcaccounts-mdm6m deletion completed in 30.11604759s • [SLOW TEST:31.029 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:53:36.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 6 10:53:37.108: INFO: Waiting up to 5m0s for pod "downward-api-d68b3de4-8f87-11ea-b5fe-0242ac110017" in namespace "e2e-tests-downward-api-q7t8b" to be "success or failure" May 6 10:53:37.141: INFO: Pod "downward-api-d68b3de4-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 32.505645ms May 6 10:53:39.223: INFO: Pod "downward-api-d68b3de4-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114826891s May 6 10:53:41.226: INFO: Pod "downward-api-d68b3de4-8f87-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118030022s STEP: Saw pod success May 6 10:53:41.226: INFO: Pod "downward-api-d68b3de4-8f87-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:53:41.229: INFO: Trying to get logs from node hunter-worker2 pod downward-api-d68b3de4-8f87-11ea-b5fe-0242ac110017 container dapi-container: STEP: delete the pod May 6 10:53:41.413: INFO: Waiting for pod downward-api-d68b3de4-8f87-11ea-b5fe-0242ac110017 to disappear May 6 10:53:41.435: INFO: Pod downward-api-d68b3de4-8f87-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:53:41.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-q7t8b" for this suite. 
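The host-IP test just above relies on the downward API's `fieldRef` to surface `status.hostIP` into the container's environment. A minimal pod sketch showing that wiring (pod name, image, and the echoed variable are assumptions for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod whose container receives the node's IP through the downward API,
	// as in the "should provide host IP as an env var" test above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-hostip-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```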
May 6 10:53:47.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:53:47.492: INFO: namespace: e2e-tests-downward-api-q7t8b, resource: bindings, ignored listing per whitelist May 6 10:53:47.532: INFO: namespace e2e-tests-downward-api-q7t8b deletion completed in 6.092027971s • [SLOW TEST:10.556 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:53:47.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 6 10:53:47.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 6 10:53:47.720: INFO: stderr: "" May 6 10:53:47.720: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:53:47.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bcw8h" for this suite. 
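The Kubectl cluster-info test above simply shells out to `kubectl cluster-info` and checks that the master service appears in the output (the coloured stdout captured in the log). A minimal way to drive the same command from Go, assuming the binary and kubeconfig paths shown in the log:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Run `kubectl cluster-info` against an explicit kubeconfig, matching the
	// command line recorded in the log above.
	cmd := exec.Command("/usr/local/bin/kubectl", "--kubeconfig=/root/.kube/config", "cluster-info")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("cluster-info failed: %v\n%s", err, out)
	}
	// The conformance check only asserts that the Kubernetes master service
	// is listed in this output.
	fmt.Printf("%s", out)
}
```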
May 6 10:53:53.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:53:53.808: INFO: namespace: e2e-tests-kubectl-bcw8h, resource: bindings, ignored listing per whitelist May 6 10:53:53.824: INFO: namespace e2e-tests-kubectl-bcw8h deletion completed in 6.100468119s • [SLOW TEST:6.292 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:53:53.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 6 10:53:53.951: INFO: Waiting up to 5m0s for pod "pod-e0925020-8f87-11ea-b5fe-0242ac110017" in namespace "e2e-tests-emptydir-tccv5" to be "success or failure" May 6 10:53:53.957: INFO: Pod "pod-e0925020-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.552856ms May 6 10:53:56.116: INFO: Pod "pod-e0925020-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165326346s May 6 10:53:58.120: INFO: Pod "pod-e0925020-8f87-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.169024156s STEP: Saw pod success May 6 10:53:58.120: INFO: Pod "pod-e0925020-8f87-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:53:58.123: INFO: Trying to get logs from node hunter-worker2 pod pod-e0925020-8f87-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 10:53:58.159: INFO: Waiting for pod pod-e0925020-8f87-11ea-b5fe-0242ac110017 to disappear May 6 10:53:58.166: INFO: Pod pod-e0925020-8f87-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:53:58.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-tccv5" for this suite. 
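The EmptyDir (non-root,0777,default) test above runs a non-root container against a default-medium emptyDir and verifies 0777 file permissions. A sketch of an equivalent pod (the UID, image, and shell commands are assumptions, the suite uses its own mounttest image):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // non-root UID; illustrative, not the suite's test user
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-nonroot-0777-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "test-container",
				Image:           "busybox",
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				// Create a mode-0777 file on the default-medium emptyDir and list it.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "" (default) is backed by node storage; the tmpfs
					// variants earlier in the log use StorageMediumMemory instead.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```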
May 6 10:54:04.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:54:04.241: INFO: namespace: e2e-tests-emptydir-tccv5, resource: bindings, ignored listing per whitelist May 6 10:54:04.275: INFO: namespace e2e-tests-emptydir-tccv5 deletion completed in 6.105475203s • [SLOW TEST:10.450 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:54:04.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-pmp8d [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-pmp8d STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-pmp8d STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-pmp8d STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-pmp8d May 6 10:54:08.441: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-pmp8d, name: ss-0, uid: e805393c-8f87-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. May 6 10:54:11.246: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-pmp8d, name: ss-0, uid: e805393c-8f87-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. May 6 10:54:11.252: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-pmp8d, name: ss-0, uid: e805393c-8f87-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. 
May 6 10:54:11.276: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-pmp8d STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-pmp8d STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-pmp8d and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 6 10:54:25.430: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pmp8d May 6 10:54:25.434: INFO: Scaling statefulset ss to 0 May 6 10:54:35.474: INFO: Waiting for statefulset status.replicas updated to 0 May 6 10:54:35.477: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:54:35.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-pmp8d" for this suite. May 6 10:54:41.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:54:41.622: INFO: namespace: e2e-tests-statefulset-pmp8d, resource: bindings, ignored listing per whitelist May 6 10:54:41.638: INFO: namespace e2e-tests-statefulset-pmp8d deletion completed in 6.129727381s • [SLOW TEST:37.363 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:54:41.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-fd1839e3-8f87-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 10:54:41.792: INFO: Waiting up to 5m0s for pod "pod-secrets-fd18cc97-8f87-11ea-b5fe-0242ac110017" in namespace "e2e-tests-secrets-n7jr6" to be "success or failure" May 6 10:54:41.809: INFO: Pod "pod-secrets-fd18cc97-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 17.388328ms May 6 10:54:43.813: INFO: Pod "pod-secrets-fd18cc97-8f87-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021332227s May 6 10:54:45.817: INFO: Pod "pod-secrets-fd18cc97-8f87-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025450473s STEP: Saw pod success May 6 10:54:45.817: INFO: Pod "pod-secrets-fd18cc97-8f87-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:54:45.820: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-fd18cc97-8f87-11ea-b5fe-0242ac110017 container secret-volume-test: STEP: delete the pod May 6 10:54:45.859: INFO: Waiting for pod pod-secrets-fd18cc97-8f87-11ea-b5fe-0242ac110017 to disappear May 6 10:54:45.873: INFO: Pod pod-secrets-fd18cc97-8f87-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:54:45.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-n7jr6" for this suite. May 6 10:54:51.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:54:51.909: INFO: namespace: e2e-tests-secrets-n7jr6, resource: bindings, ignored listing per whitelist May 6 10:54:51.972: INFO: namespace e2e-tests-secrets-n7jr6 deletion completed in 6.095259152s • [SLOW TEST:10.334 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:54:51.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-q6fw STEP: Creating a pod to test atomic-volume-subpath May 6 10:54:52.098: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-q6fw" in namespace "e2e-tests-subpath-bdhs4" to be "success or failure" May 6 10:54:52.108: INFO: Pod "pod-subpath-test-projected-q6fw": Phase="Pending", Reason="", readiness=false. Elapsed: 9.969129ms May 6 10:54:54.112: INFO: Pod "pod-subpath-test-projected-q6fw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013806015s May 6 10:54:56.116: INFO: Pod "pod-subpath-test-projected-q6fw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018283535s May 6 10:54:58.120: INFO: Pod "pod-subpath-test-projected-q6fw": Phase="Running", Reason="", readiness=false. Elapsed: 6.022541208s May 6 10:55:00.125: INFO: Pod "pod-subpath-test-projected-q6fw": Phase="Running", Reason="", readiness=false. Elapsed: 8.026691133s May 6 10:55:02.127: INFO: Pod "pod-subpath-test-projected-q6fw": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.029636839s May 6 10:55:04.131: INFO: Pod "pod-subpath-test-projected-q6fw": Phase="Running", Reason="", readiness=false. Elapsed: 12.033464158s May 6 10:55:06.136: INFO: Pod "pod-subpath-test-projected-q6fw": Phase="Running", Reason="", readiness=false. Elapsed: 14.037681205s May 6 10:55:08.140: INFO: Pod "pod-subpath-test-projected-q6fw": Phase="Running", Reason="", readiness=false. Elapsed: 16.042349366s May 6 10:55:10.145: INFO: Pod "pod-subpath-test-projected-q6fw": Phase="Running", Reason="", readiness=false. Elapsed: 18.046993384s May 6 10:55:12.149: INFO: Pod "pod-subpath-test-projected-q6fw": Phase="Running", Reason="", readiness=false. Elapsed: 20.051265024s May 6 10:55:14.153: INFO: Pod "pod-subpath-test-projected-q6fw": Phase="Running", Reason="", readiness=false. Elapsed: 22.055570261s May 6 10:55:16.345: INFO: Pod "pod-subpath-test-projected-q6fw": Phase="Running", Reason="", readiness=false. Elapsed: 24.247391768s May 6 10:55:18.349: INFO: Pod "pod-subpath-test-projected-q6fw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.25074s STEP: Saw pod success May 6 10:55:18.349: INFO: Pod "pod-subpath-test-projected-q6fw" satisfied condition "success or failure" May 6 10:55:18.351: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-q6fw container test-container-subpath-projected-q6fw: STEP: delete the pod May 6 10:55:18.390: INFO: Waiting for pod pod-subpath-test-projected-q6fw to disappear May 6 10:55:18.401: INFO: Pod pod-subpath-test-projected-q6fw no longer exists STEP: Deleting pod pod-subpath-test-projected-q6fw May 6 10:55:18.401: INFO: Deleting pod "pod-subpath-test-projected-q6fw" in namespace "e2e-tests-subpath-bdhs4" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:55:18.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-bdhs4" for this suite. 
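The Subpath test above mounts a single entry of a projected volume via `subPath` and keeps the pod Running while the atomic writer updates the source. A simplified sketch of the subPath mechanism only (ConfigMap name, key, and paths are hypothetical; the suite builds a more elaborate atomic-writer pod):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Container mounting one path out of a projected volume via subPath,
	// the mechanism the "Atomic writer volumes ... projected pod" test exercises.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-projected-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"cat", "/test-volume/projected-file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-volume",
					MountPath: "/test-volume/projected-file",
					SubPath:   "projected-file", // mount only this entry, not the whole volume
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-source-example"},
								Items:                []corev1.KeyToPath{{Key: "data", Path: "projected-file"}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```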
May 6 10:55:24.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:55:24.499: INFO: namespace: e2e-tests-subpath-bdhs4, resource: bindings, ignored listing per whitelist May 6 10:55:24.499: INFO: namespace e2e-tests-subpath-bdhs4 deletion completed in 6.093207264s • [SLOW TEST:32.527 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:55:24.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-wjlm8 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-wjlm8 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-wjlm8 May 6 10:55:24.599: INFO: Found 0 stateful pods, waiting for 1 May 6 10:55:34.604: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 6 10:55:34.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wjlm8 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 6 10:55:34.998: INFO: stderr: "I0506 10:55:34.731637 297 log.go:172] (0xc0007ae0b0) (0xc0005ce000) Create stream\nI0506 10:55:34.731723 297 log.go:172] (0xc0007ae0b0) (0xc0005ce000) Stream added, broadcasting: 1\nI0506 10:55:34.734808 297 log.go:172] (0xc0007ae0b0) Reply frame received for 1\nI0506 10:55:34.734842 297 log.go:172] (0xc0007ae0b0) (0xc000298d20) Create stream\nI0506 10:55:34.734850 297 log.go:172] (0xc0007ae0b0) (0xc000298d20) Stream added, broadcasting: 3\nI0506 10:55:34.735593 297 log.go:172] (0xc0007ae0b0) Reply frame received for 3\nI0506 10:55:34.735611 297 log.go:172] (0xc0007ae0b0) (0xc0005ce140) Create stream\nI0506 10:55:34.735619 297 log.go:172] (0xc0007ae0b0) (0xc0005ce140) Stream added, broadcasting: 5\nI0506 10:55:34.736421 297 log.go:172] (0xc0007ae0b0) Reply frame received for 
5\nI0506 10:55:34.990774 297 log.go:172] (0xc0007ae0b0) Data frame received for 3\nI0506 10:55:34.990828 297 log.go:172] (0xc000298d20) (3) Data frame handling\nI0506 10:55:34.990846 297 log.go:172] (0xc000298d20) (3) Data frame sent\nI0506 10:55:34.990858 297 log.go:172] (0xc0007ae0b0) Data frame received for 3\nI0506 10:55:34.990868 297 log.go:172] (0xc000298d20) (3) Data frame handling\nI0506 10:55:34.990922 297 log.go:172] (0xc0007ae0b0) Data frame received for 5\nI0506 10:55:34.990961 297 log.go:172] (0xc0005ce140) (5) Data frame handling\nI0506 10:55:34.993014 297 log.go:172] (0xc0007ae0b0) Data frame received for 1\nI0506 10:55:34.993054 297 log.go:172] (0xc0005ce000) (1) Data frame handling\nI0506 10:55:34.993078 297 log.go:172] (0xc0005ce000) (1) Data frame sent\nI0506 10:55:34.993323 297 log.go:172] (0xc0007ae0b0) (0xc0005ce000) Stream removed, broadcasting: 1\nI0506 10:55:34.993408 297 log.go:172] (0xc0007ae0b0) Go away received\nI0506 10:55:34.993657 297 log.go:172] (0xc0007ae0b0) (0xc0005ce000) Stream removed, broadcasting: 1\nI0506 10:55:34.993690 297 log.go:172] (0xc0007ae0b0) (0xc000298d20) Stream removed, broadcasting: 3\nI0506 10:55:34.993712 297 log.go:172] (0xc0007ae0b0) (0xc0005ce140) Stream removed, broadcasting: 5\n" May 6 10:55:34.998: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 6 10:55:34.998: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 6 10:55:35.002: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 6 10:55:45.400: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 10:55:45.400: INFO: Waiting for statefulset status.replicas updated to 0 May 6 10:55:45.432: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999716s May 6 10:55:46.507: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.978577062s May 6 10:55:47.510: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.903451039s May 6 10:55:48.515: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.899913364s May 6 10:55:49.520: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.894963228s May 6 10:55:50.525: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.890307281s May 6 10:55:51.529: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.88511578s May 6 10:55:52.534: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.880838733s May 6 10:55:53.537: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.876748783s May 6 10:55:54.542: INFO: Verifying statefulset ss doesn't scale past 1 for another 872.725458ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-wjlm8 May 6 10:55:55.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wjlm8 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 10:55:55.762: INFO: stderr: "I0506 10:55:55.683998 318 log.go:172] (0xc00071a370) (0xc000760640) Create stream\nI0506 10:55:55.684083 318 log.go:172] (0xc00071a370) (0xc000760640) Stream added, broadcasting: 1\nI0506 10:55:55.687577 318 log.go:172] (0xc00071a370) Reply frame received for 1\nI0506 10:55:55.687614 318 log.go:172] (0xc00071a370) (0xc00065ae60) Create stream\nI0506 10:55:55.687623 318 log.go:172] (0xc00071a370) 
(0xc00065ae60) Stream added, broadcasting: 3\nI0506 10:55:55.688434 318 log.go:172] (0xc00071a370) Reply frame received for 3\nI0506 10:55:55.688452 318 log.go:172] (0xc00071a370) (0xc0007606e0) Create stream\nI0506 10:55:55.688464 318 log.go:172] (0xc00071a370) (0xc0007606e0) Stream added, broadcasting: 5\nI0506 10:55:55.689529 318 log.go:172] (0xc00071a370) Reply frame received for 5\nI0506 10:55:55.755286 318 log.go:172] (0xc00071a370) Data frame received for 5\nI0506 10:55:55.755346 318 log.go:172] (0xc0007606e0) (5) Data frame handling\nI0506 10:55:55.755378 318 log.go:172] (0xc00071a370) Data frame received for 3\nI0506 10:55:55.755390 318 log.go:172] (0xc00065ae60) (3) Data frame handling\nI0506 10:55:55.755402 318 log.go:172] (0xc00065ae60) (3) Data frame sent\nI0506 10:55:55.755413 318 log.go:172] (0xc00071a370) Data frame received for 3\nI0506 10:55:55.755423 318 log.go:172] (0xc00065ae60) (3) Data frame handling\nI0506 10:55:55.756806 318 log.go:172] (0xc00071a370) Data frame received for 1\nI0506 10:55:55.756830 318 log.go:172] (0xc000760640) (1) Data frame handling\nI0506 10:55:55.756864 318 log.go:172] (0xc000760640) (1) Data frame sent\nI0506 10:55:55.756906 318 log.go:172] (0xc00071a370) (0xc000760640) Stream removed, broadcasting: 1\nI0506 10:55:55.757367 318 log.go:172] (0xc00071a370) (0xc000760640) Stream removed, broadcasting: 1\nI0506 10:55:55.757426 318 log.go:172] (0xc00071a370) (0xc00065ae60) Stream removed, broadcasting: 3\nI0506 10:55:55.757459 318 log.go:172] (0xc00071a370) (0xc0007606e0) Stream removed, broadcasting: 5\n" May 6 10:55:55.762: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 6 10:55:55.762: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 6 10:55:55.766: INFO: Found 1 stateful pods, waiting for 3 May 6 10:56:05.771: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 6 10:56:05.771: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 6 10:56:05.771: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 6 10:56:05.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wjlm8 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 6 10:56:05.975: INFO: stderr: "I0506 10:56:05.907903 341 log.go:172] (0xc000138840) (0xc00076c640) Create stream\nI0506 10:56:05.907970 341 log.go:172] (0xc000138840) (0xc00076c640) Stream added, broadcasting: 1\nI0506 10:56:05.911402 341 log.go:172] (0xc000138840) Reply frame received for 1\nI0506 10:56:05.911462 341 log.go:172] (0xc000138840) (0xc0007ccf00) Create stream\nI0506 10:56:05.911481 341 log.go:172] (0xc000138840) (0xc0007ccf00) Stream added, broadcasting: 3\nI0506 10:56:05.912680 341 log.go:172] (0xc000138840) Reply frame received for 3\nI0506 10:56:05.912716 341 log.go:172] (0xc000138840) (0xc0007cd040) Create stream\nI0506 10:56:05.912731 341 log.go:172] (0xc000138840) (0xc0007cd040) Stream added, broadcasting: 5\nI0506 10:56:05.913918 341 log.go:172] (0xc000138840) Reply frame received for 5\nI0506 10:56:05.970065 341 log.go:172] (0xc000138840) Data frame received for 5\nI0506 10:56:05.970212 341 log.go:172] (0xc0007cd040) (5) Data frame handling\nI0506 10:56:05.970252 341 
log.go:172] (0xc000138840) Data frame received for 3\nI0506 10:56:05.970274 341 log.go:172] (0xc0007ccf00) (3) Data frame handling\nI0506 10:56:05.970299 341 log.go:172] (0xc0007ccf00) (3) Data frame sent\nI0506 10:56:05.970321 341 log.go:172] (0xc000138840) Data frame received for 3\nI0506 10:56:05.970338 341 log.go:172] (0xc0007ccf00) (3) Data frame handling\nI0506 10:56:05.971359 341 log.go:172] (0xc000138840) Data frame received for 1\nI0506 10:56:05.971424 341 log.go:172] (0xc00076c640) (1) Data frame handling\nI0506 10:56:05.971448 341 log.go:172] (0xc00076c640) (1) Data frame sent\nI0506 10:56:05.971465 341 log.go:172] (0xc000138840) (0xc00076c640) Stream removed, broadcasting: 1\nI0506 10:56:05.971495 341 log.go:172] (0xc000138840) Go away received\nI0506 10:56:05.971897 341 log.go:172] (0xc000138840) (0xc00076c640) Stream removed, broadcasting: 1\nI0506 10:56:05.971928 341 log.go:172] (0xc000138840) (0xc0007ccf00) Stream removed, broadcasting: 3\nI0506 10:56:05.971959 341 log.go:172] (0xc000138840) (0xc0007cd040) Stream removed, broadcasting: 5\n" May 6 10:56:05.975: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 6 10:56:05.975: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 6 10:56:05.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wjlm8 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 6 10:56:06.196: INFO: stderr: "I0506 10:56:06.093804 363 log.go:172] (0xc000746370) (0xc000417360) Create stream\nI0506 10:56:06.093872 363 log.go:172] (0xc000746370) (0xc000417360) Stream added, broadcasting: 1\nI0506 10:56:06.096882 363 log.go:172] (0xc000746370) Reply frame received for 1\nI0506 10:56:06.096925 363 log.go:172] (0xc000746370) (0xc000512000) Create stream\nI0506 10:56:06.096939 363 log.go:172] (0xc000746370) (0xc000512000) Stream added, broadcasting: 3\nI0506 10:56:06.098266 363 log.go:172] (0xc000746370) Reply frame received for 3\nI0506 10:56:06.098312 363 log.go:172] (0xc000746370) (0xc000496000) Create stream\nI0506 10:56:06.098330 363 log.go:172] (0xc000746370) (0xc000496000) Stream added, broadcasting: 5\nI0506 10:56:06.099250 363 log.go:172] (0xc000746370) Reply frame received for 5\nI0506 10:56:06.189389 363 log.go:172] (0xc000746370) Data frame received for 3\nI0506 10:56:06.189440 363 log.go:172] (0xc000512000) (3) Data frame handling\nI0506 10:56:06.189458 363 log.go:172] (0xc000512000) (3) Data frame sent\nI0506 10:56:06.189470 363 log.go:172] (0xc000746370) Data frame received for 3\nI0506 10:56:06.189521 363 log.go:172] (0xc000746370) Data frame received for 5\nI0506 10:56:06.189592 363 log.go:172] (0xc000496000) (5) Data frame handling\nI0506 10:56:06.189644 363 log.go:172] (0xc000512000) (3) Data frame handling\nI0506 10:56:06.191222 363 log.go:172] (0xc000746370) Data frame received for 1\nI0506 10:56:06.191240 363 log.go:172] (0xc000417360) (1) Data frame handling\nI0506 10:56:06.191252 363 log.go:172] (0xc000417360) (1) Data frame sent\nI0506 10:56:06.191479 363 log.go:172] (0xc000746370) (0xc000417360) Stream removed, broadcasting: 1\nI0506 10:56:06.191752 363 log.go:172] (0xc000746370) (0xc000417360) Stream removed, broadcasting: 1\nI0506 10:56:06.191794 363 log.go:172] (0xc000746370) (0xc000512000) Stream removed, broadcasting: 3\nI0506 10:56:06.191816 363 log.go:172] (0xc000746370) (0xc000496000) Stream removed, broadcasting: 5\n" May 6 
10:56:06.196: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 6 10:56:06.196: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 6 10:56:06.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wjlm8 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 6 10:56:06.450: INFO: stderr: "I0506 10:56:06.337336 386 log.go:172] (0xc000138840) (0xc0006af2c0) Create stream\nI0506 10:56:06.337402 386 log.go:172] (0xc000138840) (0xc0006af2c0) Stream added, broadcasting: 1\nI0506 10:56:06.339548 386 log.go:172] (0xc000138840) Reply frame received for 1\nI0506 10:56:06.339589 386 log.go:172] (0xc000138840) (0xc0006af360) Create stream\nI0506 10:56:06.339605 386 log.go:172] (0xc000138840) (0xc0006af360) Stream added, broadcasting: 3\nI0506 10:56:06.340307 386 log.go:172] (0xc000138840) Reply frame received for 3\nI0506 10:56:06.340354 386 log.go:172] (0xc000138840) (0xc0006af400) Create stream\nI0506 10:56:06.340374 386 log.go:172] (0xc000138840) (0xc0006af400) Stream added, broadcasting: 5\nI0506 10:56:06.341344 386 log.go:172] (0xc000138840) Reply frame received for 5\nI0506 10:56:06.443601 386 log.go:172] (0xc000138840) Data frame received for 3\nI0506 10:56:06.443651 386 log.go:172] (0xc0006af360) (3) Data frame handling\nI0506 10:56:06.443791 386 log.go:172] (0xc000138840) Data frame received for 5\nI0506 10:56:06.443822 386 log.go:172] (0xc0006af400) (5) Data frame handling\nI0506 10:56:06.443867 386 log.go:172] (0xc0006af360) (3) Data frame sent\nI0506 10:56:06.443892 386 log.go:172] (0xc000138840) Data frame received for 3\nI0506 10:56:06.443915 386 log.go:172] (0xc0006af360) (3) Data frame handling\nI0506 10:56:06.445901 386 log.go:172] (0xc000138840) Data frame received for 1\nI0506 10:56:06.445930 386 log.go:172] (0xc0006af2c0) (1) Data frame handling\nI0506 10:56:06.445940 386 log.go:172] (0xc0006af2c0) (1) Data frame sent\nI0506 10:56:06.445952 386 log.go:172] (0xc000138840) (0xc0006af2c0) Stream removed, broadcasting: 1\nI0506 10:56:06.445985 386 log.go:172] (0xc000138840) Go away received\nI0506 10:56:06.446172 386 log.go:172] (0xc000138840) (0xc0006af2c0) Stream removed, broadcasting: 1\nI0506 10:56:06.446191 386 log.go:172] (0xc000138840) (0xc0006af360) Stream removed, broadcasting: 3\nI0506 10:56:06.446204 386 log.go:172] (0xc000138840) (0xc0006af400) Stream removed, broadcasting: 5\n" May 6 10:56:06.450: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 6 10:56:06.450: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 6 10:56:06.450: INFO: Waiting for statefulset status.replicas updated to 0 May 6 10:56:06.454: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 6 10:56:16.463: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 10:56:16.464: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 6 10:56:16.464: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 6 10:56:16.481: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999519s May 6 10:56:17.485: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989714418s May 6 10:56:18.491: INFO: Verifying statefulset ss doesn't scale past 3 
for another 7.985644579s May 6 10:56:19.496: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980176696s May 6 10:56:20.502: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.974793814s May 6 10:56:21.507: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.969125388s May 6 10:56:22.513: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.963754651s May 6 10:56:23.517: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.957997968s May 6 10:56:24.522: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.953945906s May 6 10:56:25.528: INFO: Verifying statefulset ss doesn't scale past 3 for another 948.796596ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-wjlm8 May 6 10:56:26.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wjlm8 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 10:56:26.761: INFO: stderr: "I0506 10:56:26.674460 407 log.go:172] (0xc0006ea4d0) (0xc0006e3360) Create stream\nI0506 10:56:26.674518 407 log.go:172] (0xc0006ea4d0) (0xc0006e3360) Stream added, broadcasting: 1\nI0506 10:56:26.677387 407 log.go:172] (0xc0006ea4d0) Reply frame received for 1\nI0506 10:56:26.677446 407 log.go:172] (0xc0006ea4d0) (0xc0006e3400) Create stream\nI0506 10:56:26.677461 407 log.go:172] (0xc0006ea4d0) (0xc0006e3400) Stream added, broadcasting: 3\nI0506 10:56:26.678568 407 log.go:172] (0xc0006ea4d0) Reply frame received for 3\nI0506 10:56:26.678616 407 log.go:172] (0xc0006ea4d0) (0xc0006e34a0) Create stream\nI0506 10:56:26.678632 407 log.go:172] (0xc0006ea4d0) (0xc0006e34a0) Stream added, broadcasting: 5\nI0506 10:56:26.679610 407 log.go:172] (0xc0006ea4d0) Reply frame received for 5\nI0506 10:56:26.755942 407 log.go:172] (0xc0006ea4d0) Data frame received for 5\nI0506 10:56:26.755969 407 log.go:172] (0xc0006e34a0) (5) Data frame handling\nI0506 10:56:26.756112 407 log.go:172] (0xc0006ea4d0) Data frame received for 3\nI0506 10:56:26.756150 407 log.go:172] (0xc0006e3400) (3) Data frame handling\nI0506 10:56:26.756183 407 log.go:172] (0xc0006e3400) (3) Data frame sent\nI0506 10:56:26.756196 407 log.go:172] (0xc0006ea4d0) Data frame received for 3\nI0506 10:56:26.756205 407 log.go:172] (0xc0006e3400) (3) Data frame handling\nI0506 10:56:26.757402 407 log.go:172] (0xc0006ea4d0) Data frame received for 1\nI0506 10:56:26.757420 407 log.go:172] (0xc0006e3360) (1) Data frame handling\nI0506 10:56:26.757434 407 log.go:172] (0xc0006e3360) (1) Data frame sent\nI0506 10:56:26.757460 407 log.go:172] (0xc0006ea4d0) (0xc0006e3360) Stream removed, broadcasting: 1\nI0506 10:56:26.757482 407 log.go:172] (0xc0006ea4d0) Go away received\nI0506 10:56:26.757666 407 log.go:172] (0xc0006ea4d0) (0xc0006e3360) Stream removed, broadcasting: 1\nI0506 10:56:26.757678 407 log.go:172] (0xc0006ea4d0) (0xc0006e3400) Stream removed, broadcasting: 3\nI0506 10:56:26.757683 407 log.go:172] (0xc0006ea4d0) (0xc0006e34a0) Stream removed, broadcasting: 5\n" May 6 10:56:26.761: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 6 10:56:26.761: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 6 10:56:26.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wjlm8 ss-1 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' May 6 10:56:26.945: INFO: stderr: "I0506 10:56:26.885640 429 log.go:172] (0xc000714370) (0xc000756640) Create stream\nI0506 10:56:26.885713 429 log.go:172] (0xc000714370) (0xc000756640) Stream added, broadcasting: 1\nI0506 10:56:26.888207 429 log.go:172] (0xc000714370) Reply frame received for 1\nI0506 10:56:26.888263 429 log.go:172] (0xc000714370) (0xc000636dc0) Create stream\nI0506 10:56:26.888289 429 log.go:172] (0xc000714370) (0xc000636dc0) Stream added, broadcasting: 3\nI0506 10:56:26.889482 429 log.go:172] (0xc000714370) Reply frame received for 3\nI0506 10:56:26.889544 429 log.go:172] (0xc000714370) (0xc0007566e0) Create stream\nI0506 10:56:26.889565 429 log.go:172] (0xc000714370) (0xc0007566e0) Stream added, broadcasting: 5\nI0506 10:56:26.890533 429 log.go:172] (0xc000714370) Reply frame received for 5\nI0506 10:56:26.936328 429 log.go:172] (0xc000714370) Data frame received for 5\nI0506 10:56:26.936367 429 log.go:172] (0xc0007566e0) (5) Data frame handling\nI0506 10:56:26.936401 429 log.go:172] (0xc000714370) Data frame received for 3\nI0506 10:56:26.936420 429 log.go:172] (0xc000636dc0) (3) Data frame handling\nI0506 10:56:26.936440 429 log.go:172] (0xc000636dc0) (3) Data frame sent\nI0506 10:56:26.936518 429 log.go:172] (0xc000714370) Data frame received for 3\nI0506 10:56:26.936562 429 log.go:172] (0xc000636dc0) (3) Data frame handling\nI0506 10:56:26.938550 429 log.go:172] (0xc000714370) Data frame received for 1\nI0506 10:56:26.938581 429 log.go:172] (0xc000756640) (1) Data frame handling\nI0506 10:56:26.938601 429 log.go:172] (0xc000756640) (1) Data frame sent\nI0506 10:56:26.938615 429 log.go:172] (0xc000714370) (0xc000756640) Stream removed, broadcasting: 1\nI0506 10:56:26.938633 429 log.go:172] (0xc000714370) Go away received\nI0506 10:56:26.938924 429 log.go:172] (0xc000714370) (0xc000756640) Stream removed, broadcasting: 1\nI0506 10:56:26.938952 429 log.go:172] (0xc000714370) (0xc000636dc0) Stream removed, broadcasting: 3\nI0506 10:56:26.938966 429 log.go:172] (0xc000714370) (0xc0007566e0) Stream removed, broadcasting: 5\n" May 6 10:56:26.945: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 6 10:56:26.945: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 6 10:56:26.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-wjlm8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 10:56:27.161: INFO: stderr: "I0506 10:56:27.086191 453 log.go:172] (0xc0008382c0) (0xc00074a640) Create stream\nI0506 10:56:27.086241 453 log.go:172] (0xc0008382c0) (0xc00074a640) Stream added, broadcasting: 1\nI0506 10:56:27.088350 453 log.go:172] (0xc0008382c0) Reply frame received for 1\nI0506 10:56:27.088393 453 log.go:172] (0xc0008382c0) (0xc000694dc0) Create stream\nI0506 10:56:27.088413 453 log.go:172] (0xc0008382c0) (0xc000694dc0) Stream added, broadcasting: 3\nI0506 10:56:27.089545 453 log.go:172] (0xc0008382c0) Reply frame received for 3\nI0506 10:56:27.089595 453 log.go:172] (0xc0008382c0) (0xc00041c000) Create stream\nI0506 10:56:27.089614 453 log.go:172] (0xc0008382c0) (0xc00041c000) Stream added, broadcasting: 5\nI0506 10:56:27.090541 453 log.go:172] (0xc0008382c0) Reply frame received for 5\nI0506 10:56:27.154267 453 log.go:172] (0xc0008382c0) Data frame received for 3\nI0506 10:56:27.154317 453 log.go:172] (0xc000694dc0) (3) Data frame 
handling\nI0506 10:56:27.154342 453 log.go:172] (0xc000694dc0) (3) Data frame sent\nI0506 10:56:27.154599 453 log.go:172] (0xc0008382c0) Data frame received for 5\nI0506 10:56:27.154736 453 log.go:172] (0xc00041c000) (5) Data frame handling\nI0506 10:56:27.154916 453 log.go:172] (0xc0008382c0) Data frame received for 3\nI0506 10:56:27.154938 453 log.go:172] (0xc000694dc0) (3) Data frame handling\nI0506 10:56:27.156437 453 log.go:172] (0xc0008382c0) Data frame received for 1\nI0506 10:56:27.156473 453 log.go:172] (0xc00074a640) (1) Data frame handling\nI0506 10:56:27.156498 453 log.go:172] (0xc00074a640) (1) Data frame sent\nI0506 10:56:27.156879 453 log.go:172] (0xc0008382c0) (0xc00074a640) Stream removed, broadcasting: 1\nI0506 10:56:27.156920 453 log.go:172] (0xc0008382c0) Go away received\nI0506 10:56:27.157344 453 log.go:172] (0xc0008382c0) (0xc00074a640) Stream removed, broadcasting: 1\nI0506 10:56:27.157375 453 log.go:172] (0xc0008382c0) (0xc000694dc0) Stream removed, broadcasting: 3\nI0506 10:56:27.157392 453 log.go:172] (0xc0008382c0) (0xc00041c000) Stream removed, broadcasting: 5\n" May 6 10:56:27.161: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 6 10:56:27.161: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 6 10:56:27.161: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 6 10:56:57.180: INFO: Deleting all statefulset in ns e2e-tests-statefulset-wjlm8 May 6 10:56:57.189: INFO: Scaling statefulset ss to 0 May 6 10:56:57.196: INFO: Waiting for statefulset status.replicas updated to 0 May 6 10:56:57.198: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:56:57.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-wjlm8" for this suite. 
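The long StatefulSet run above scales `ss` up and down under OrderedReady pod management; the repeated `kubectl exec ... mv index.html` commands break and restore the nginx readiness probe so that scaling halts while a pod is unready. A skeleton of such a StatefulSet (field names follow the v1.13-era k8s.io/api used by this suite, where Probe embeds `Handler`; newer releases rename it to `ProbeHandler`):

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(3)
	labels := map[string]string{"foo": "bar", "baz": "blah"} // the selector the watcher in the log uses

	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // headless governing service, created separately
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// OrderedReady pod management is what makes scaling proceed one pod
			// at a time and halt while any stateful pod is unready.
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginx",
						// Moving /usr/share/nginx/html/index.html aside (as the
						// kubectl exec commands in the log do) fails this probe,
						// marks the pod unready, and freezes further scaling.
						ReadinessProbe: &corev1.Probe{
							Handler: corev1.Handler{
								HTTPGet: &corev1.HTTPGetAction{
									Path: "/index.html",
									Port: intstr.FromInt(80),
								},
							},
						},
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}
```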
May 6 10:57:03.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:57:03.282: INFO: namespace: e2e-tests-statefulset-wjlm8, resource: bindings, ignored listing per whitelist May 6 10:57:03.305: INFO: namespace e2e-tests-statefulset-wjlm8 deletion completed in 6.09394096s • [SLOW TEST:98.805 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:57:03.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 6 10:57:03.419: INFO: Waiting up to 5m0s for pod "pod-5181ce1a-8f88-11ea-b5fe-0242ac110017" in namespace "e2e-tests-emptydir-ssmpk" to be "success or failure" May 6 10:57:03.422: INFO: Pod "pod-5181ce1a-8f88-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.250819ms May 6 10:57:05.484: INFO: Pod "pod-5181ce1a-8f88-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064412532s May 6 10:57:07.487: INFO: Pod "pod-5181ce1a-8f88-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068053429s STEP: Saw pod success May 6 10:57:07.487: INFO: Pod "pod-5181ce1a-8f88-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:57:07.490: INFO: Trying to get logs from node hunter-worker2 pod pod-5181ce1a-8f88-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 10:57:07.527: INFO: Waiting for pod pod-5181ce1a-8f88-11ea-b5fe-0242ac110017 to disappear May 6 10:57:07.536: INFO: Pod pod-5181ce1a-8f88-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:57:07.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ssmpk" for this suite. 
May 6 10:57:13.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:57:13.629: INFO: namespace: e2e-tests-emptydir-ssmpk, resource: bindings, ignored listing per whitelist May 6 10:57:13.669: INFO: namespace e2e-tests-emptydir-ssmpk deletion completed in 6.12989761s • [SLOW TEST:10.364 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:57:13.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults May 6 10:57:13.802: INFO: Waiting up to 5m0s for pod "client-containers-57b2a9f4-8f88-11ea-b5fe-0242ac110017" in namespace "e2e-tests-containers-fpgbr" to be "success or failure" May 6 10:57:13.828: INFO: Pod "client-containers-57b2a9f4-8f88-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 25.674283ms May 6 10:57:15.832: INFO: Pod "client-containers-57b2a9f4-8f88-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029845992s May 6 10:57:17.835: INFO: Pod "client-containers-57b2a9f4-8f88-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033544282s STEP: Saw pod success May 6 10:57:17.835: INFO: Pod "client-containers-57b2a9f4-8f88-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:57:17.838: INFO: Trying to get logs from node hunter-worker pod client-containers-57b2a9f4-8f88-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 10:57:17.871: INFO: Waiting for pod client-containers-57b2a9f4-8f88-11ea-b5fe-0242ac110017 to disappear May 6 10:57:17.876: INFO: Pod client-containers-57b2a9f4-8f88-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:57:17.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-fpgbr" for this suite. 
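Note: the "image defaults" behaviour checked above boils down to Kubernetes using the image's own ENTRYPOINT/CMD when a container spec leaves command and args empty. A hedged way to see this by hand (pod name is illustrative; the image matches the one used elsewhere in this run):

# Create a pod with no command/args and confirm the spec really left both fields empty.
kubectl run defaults-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl get pod defaults-demo -o jsonpath='{.spec.containers[0].command}{"\n"}{.spec.containers[0].args}{"\n"}'
# Both lines print empty, so the runtime falls back to the image's entrypoint and default arguments.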
May 6 10:57:23.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:57:23.932: INFO: namespace: e2e-tests-containers-fpgbr, resource: bindings, ignored listing per whitelist May 6 10:57:23.979: INFO: namespace e2e-tests-containers-fpgbr deletion completed in 6.098671307s • [SLOW TEST:10.310 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:57:23.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-5dd4e807-8f88-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 10:57:24.106: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5dd8174c-8f88-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-prbzt" to be "success or failure" May 6 10:57:24.143: INFO: Pod "pod-projected-configmaps-5dd8174c-8f88-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 36.593275ms May 6 10:57:26.148: INFO: Pod "pod-projected-configmaps-5dd8174c-8f88-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041265336s May 6 10:57:28.152: INFO: Pod "pod-projected-configmaps-5dd8174c-8f88-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045479168s STEP: Saw pod success May 6 10:57:28.152: INFO: Pod "pod-projected-configmaps-5dd8174c-8f88-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 10:57:28.155: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-5dd8174c-8f88-11ea-b5fe-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 6 10:57:28.172: INFO: Waiting for pod pod-projected-configmaps-5dd8174c-8f88-11ea-b5fe-0242ac110017 to disappear May 6 10:57:28.177: INFO: Pod pod-projected-configmaps-5dd8174c-8f88-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:57:28.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-prbzt" for this suite. 
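Note: the defaultMode knob verified above applies to the projected volume as a whole. A minimal sketch, with the ConfigMap and pod names invented rather than taken from this run:

kubectl create configmap demo-config --from-literal=key=value
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/key"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      defaultMode: 0400          # projected files appear read-only for the owner
      sources:
      - configMap:
          name: demo-config
EOF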
May 6 10:57:34.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:57:34.262: INFO: namespace: e2e-tests-projected-prbzt, resource: bindings, ignored listing per whitelist May 6 10:57:34.266: INFO: namespace e2e-tests-projected-prbzt deletion completed in 6.085968124s • [SLOW TEST:10.287 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:57:34.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-jg7xf.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-jg7xf.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-jg7xf.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-jg7xf.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-jg7xf.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-jg7xf.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 10:57:42.414: INFO: DNS probes using e2e-tests-dns-jg7xf/dns-test-63f10b2e-8f88-11ea-b5fe-0242ac110017 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:57:42.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-jg7xf" for this suite. 
May 6 10:57:50.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:57:50.499: INFO: namespace: e2e-tests-dns-jg7xf, resource: bindings, ignored listing per whitelist May 6 10:57:50.547: INFO: namespace e2e-tests-dns-jg7xf deletion completed in 8.083338921s • [SLOW TEST:16.281 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:57:50.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0506 10:58:02.209548 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 6 10:58:02.209: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:58:02.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-pbsk4" for this suite. 
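Note: what the garbage-collector test asserts is visible directly in pod metadata: a dependent that still has a valid owner is kept while its other owner is being deleted. The RC names below are the ones from the log; the inspection commands are an illustrative sketch:

# Pods owned by both RCs keep simpletest-rc-to-stay as an owner after simpletest-rc-to-be-deleted goes away.
kubectl get pods
kubectl get pod <one-of-the-shared-pods> -o jsonpath='{.metadata.ownerReferences[*].name}{"\n"}'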
May 6 10:58:10.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:58:10.346: INFO: namespace: e2e-tests-gc-pbsk4, resource: bindings, ignored listing per whitelist May 6 10:58:10.358: INFO: namespace e2e-tests-gc-pbsk4 deletion completed in 8.146144247s • [SLOW TEST:19.810 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:58:10.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jpccd May 6 10:58:14.844: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jpccd STEP: checking the pod's current state and verifying that restartCount is present May 6 10:58:14.847: INFO: Initial restart count of pod liveness-http is 0 May 6 10:58:30.881: INFO: Restart count of pod e2e-tests-container-probe-jpccd/liveness-http is now 1 (16.033959898s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:58:30.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-jpccd" for this suite. 
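Note: the restartCount going from 0 to 1 above is the kubelet reacting to a failing HTTP liveness probe. A minimal probe sketch, assuming the k8s.gcr.io/liveness example image from the Kubernetes docs (it serves /healthz on 8080 and starts failing after roughly ten seconds); the delays are illustrative, not the test's values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: app
    image: k8s.gcr.io/liveness
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1       # restart on the first failed probe
EOF
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'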
May 6 10:58:36.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:58:36.966: INFO: namespace: e2e-tests-container-probe-jpccd, resource: bindings, ignored listing per whitelist May 6 10:58:36.987: INFO: namespace e2e-tests-container-probe-jpccd deletion completed in 6.088057663s • [SLOW TEST:26.630 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:58:36.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 6 10:58:37.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-25ngj' May 6 10:58:39.292: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 6 10:58:39.292: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 6 10:58:39.296: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 6 10:58:39.305: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 6 10:58:39.392: INFO: scanned /root for discovery docs: May 6 10:58:39.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-25ngj' May 6 10:58:55.450: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 6 10:58:55.450: INFO: stdout: "Created e2e-test-nginx-rc-ffd13ac27b5c866eafc24fed3ec6196b\nScaling up e2e-test-nginx-rc-ffd13ac27b5c866eafc24fed3ec6196b from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ffd13ac27b5c866eafc24fed3ec6196b up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ffd13ac27b5c866eafc24fed3ec6196b to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" May 6 10:58:55.450: INFO: stdout: "Created e2e-test-nginx-rc-ffd13ac27b5c866eafc24fed3ec6196b\nScaling up e2e-test-nginx-rc-ffd13ac27b5c866eafc24fed3ec6196b from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ffd13ac27b5c866eafc24fed3ec6196b up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ffd13ac27b5c866eafc24fed3ec6196b to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 6 10:58:55.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-25ngj' May 6 10:58:55.554: INFO: stderr: "" May 6 10:58:55.554: INFO: stdout: "e2e-test-nginx-rc-ffd13ac27b5c866eafc24fed3ec6196b-l9l8v " May 6 10:58:55.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ffd13ac27b5c866eafc24fed3ec6196b-l9l8v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-25ngj' May 6 10:58:55.660: INFO: stderr: "" May 6 10:58:55.660: INFO: stdout: "true" May 6 10:58:55.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ffd13ac27b5c866eafc24fed3ec6196b-l9l8v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-25ngj' May 6 10:58:55.776: INFO: stderr: "" May 6 10:58:55.776: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 6 10:58:55.776: INFO: e2e-test-nginx-rc-ffd13ac27b5c866eafc24fed3ec6196b-l9l8v is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 May 6 10:58:55.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-25ngj' May 6 10:58:55.898: INFO: stderr: "" May 6 10:58:55.898: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:58:55.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-25ngj" for this suite. 
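Note: the deprecated rolling-update flow above is driven entirely by the two kubectl invocations already shown in the log; restated as a hedged sketch with the kubeconfig and namespace flags dropped:

# Create the RC, then roll it to the same image; kubectl replaces pods one by one and renames the new RC back.
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
    --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
kubectl get pods -l run=e2e-test-nginx-rc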
May 6 10:59:17.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:59:18.038: INFO: namespace: e2e-tests-kubectl-25ngj, resource: bindings, ignored listing per whitelist May 6 10:59:18.047: INFO: namespace e2e-tests-kubectl-25ngj deletion completed in 22.114268463s • [SLOW TEST:41.060 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:59:18.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-a1dde352-8f88-11ea-b5fe-0242ac110017 STEP: Creating configMap with name cm-test-opt-upd-a1dde39e-8f88-11ea-b5fe-0242ac110017 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a1dde352-8f88-11ea-b5fe-0242ac110017 STEP: Updating configmap cm-test-opt-upd-a1dde39e-8f88-11ea-b5fe-0242ac110017 STEP: Creating configMap with name cm-test-opt-create-a1dde3b5-8f88-11ea-b5fe-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 10:59:28.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-js55z" for this suite. 
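Note: the test above rests on two properties of configMap volumes: a reference marked optional does not block pod startup when the ConfigMap is missing, and later creations or updates are eventually synced into the mounted files by the kubelet. A minimal sketch with invented names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cfg/key 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: maybe-missing
      optional: true       # pod starts even though the ConfigMap does not exist yet
EOF
kubectl create configmap maybe-missing --from-literal=key=value-1
# Within roughly one kubelet sync period, /etc/cfg/key appears inside the running container.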
May 6 10:59:50.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 10:59:50.552: INFO: namespace: e2e-tests-configmap-js55z, resource: bindings, ignored listing per whitelist May 6 10:59:50.573: INFO: namespace e2e-tests-configmap-js55z deletion completed in 22.160075069s • [SLOW TEST:32.526 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 10:59:50.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-lghth STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 10:59:50.655: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 6 11:00:20.911: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.120 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-lghth PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 11:00:20.911: INFO: >>> kubeConfig: /root/.kube/config I0506 11:00:20.946947 7 log.go:172] (0xc000b9efd0) (0xc00206eaa0) Create stream I0506 11:00:20.946985 7 log.go:172] (0xc000b9efd0) (0xc00206eaa0) Stream added, broadcasting: 1 I0506 11:00:20.949632 7 log.go:172] (0xc000b9efd0) Reply frame received for 1 I0506 11:00:20.949679 7 log.go:172] (0xc000b9efd0) (0xc00206eb40) Create stream I0506 11:00:20.949696 7 log.go:172] (0xc000b9efd0) (0xc00206eb40) Stream added, broadcasting: 3 I0506 11:00:20.950570 7 log.go:172] (0xc000b9efd0) Reply frame received for 3 I0506 11:00:20.950610 7 log.go:172] (0xc000b9efd0) (0xc00206ebe0) Create stream I0506 11:00:20.950624 7 log.go:172] (0xc000b9efd0) (0xc00206ebe0) Stream added, broadcasting: 5 I0506 11:00:20.951403 7 log.go:172] (0xc000b9efd0) Reply frame received for 5 I0506 11:00:22.064531 7 log.go:172] (0xc000b9efd0) Data frame received for 5 I0506 11:00:22.064594 7 log.go:172] (0xc00206ebe0) (5) Data frame handling I0506 11:00:22.064633 7 log.go:172] (0xc000b9efd0) Data frame received for 3 I0506 11:00:22.064647 7 log.go:172] (0xc00206eb40) (3) Data frame handling I0506 11:00:22.064657 7 log.go:172] (0xc00206eb40) (3) Data frame sent I0506 11:00:22.064665 7 log.go:172] (0xc000b9efd0) Data frame received for 3 I0506 11:00:22.064684 7 log.go:172] (0xc00206eb40) (3) Data frame handling I0506 11:00:22.068104 7 log.go:172] (0xc000b9efd0) Data frame received for 1 I0506 11:00:22.068142 7 log.go:172] 
(0xc00206eaa0) (1) Data frame handling I0506 11:00:22.068172 7 log.go:172] (0xc00206eaa0) (1) Data frame sent I0506 11:00:22.068988 7 log.go:172] (0xc000b9efd0) (0xc00206eaa0) Stream removed, broadcasting: 1 I0506 11:00:22.069352 7 log.go:172] (0xc000b9efd0) (0xc00206eaa0) Stream removed, broadcasting: 1 I0506 11:00:22.069370 7 log.go:172] (0xc000b9efd0) (0xc00206eb40) Stream removed, broadcasting: 3 I0506 11:00:22.069554 7 log.go:172] (0xc000b9efd0) (0xc00206ebe0) Stream removed, broadcasting: 5 May 6 11:00:22.070: INFO: Found all expected endpoints: [netserver-0] May 6 11:00:22.074: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.99 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-lghth PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 11:00:22.074: INFO: >>> kubeConfig: /root/.kube/config I0506 11:00:22.096269 7 log.go:172] (0xc000a8c4d0) (0xc002047d60) Create stream I0506 11:00:22.096295 7 log.go:172] (0xc000a8c4d0) (0xc002047d60) Stream added, broadcasting: 1 I0506 11:00:22.098747 7 log.go:172] (0xc000a8c4d0) Reply frame received for 1 I0506 11:00:22.098784 7 log.go:172] (0xc000a8c4d0) (0xc001bbf7c0) Create stream I0506 11:00:22.098799 7 log.go:172] (0xc000a8c4d0) (0xc001bbf7c0) Stream added, broadcasting: 3 I0506 11:00:22.099701 7 log.go:172] (0xc000a8c4d0) Reply frame received for 3 I0506 11:00:22.099746 7 log.go:172] (0xc000a8c4d0) (0xc001cc30e0) Create stream I0506 11:00:22.099762 7 log.go:172] (0xc000a8c4d0) (0xc001cc30e0) Stream added, broadcasting: 5 I0506 11:00:22.100709 7 log.go:172] (0xc000a8c4d0) Reply frame received for 5 I0506 11:00:23.188803 7 log.go:172] (0xc000a8c4d0) Data frame received for 5 I0506 11:00:23.188841 7 log.go:172] (0xc001cc30e0) (5) Data frame handling I0506 11:00:23.188886 7 log.go:172] (0xc000a8c4d0) Data frame received for 3 I0506 11:00:23.188956 7 log.go:172] (0xc001bbf7c0) (3) Data frame handling I0506 11:00:23.188992 7 log.go:172] (0xc001bbf7c0) (3) Data frame sent I0506 11:00:23.189012 7 log.go:172] (0xc000a8c4d0) Data frame received for 3 I0506 11:00:23.189030 7 log.go:172] (0xc001bbf7c0) (3) Data frame handling I0506 11:00:23.191137 7 log.go:172] (0xc000a8c4d0) Data frame received for 1 I0506 11:00:23.191175 7 log.go:172] (0xc002047d60) (1) Data frame handling I0506 11:00:23.191192 7 log.go:172] (0xc002047d60) (1) Data frame sent I0506 11:00:23.191206 7 log.go:172] (0xc000a8c4d0) (0xc002047d60) Stream removed, broadcasting: 1 I0506 11:00:23.191282 7 log.go:172] (0xc000a8c4d0) (0xc002047d60) Stream removed, broadcasting: 1 I0506 11:00:23.191290 7 log.go:172] (0xc000a8c4d0) (0xc001bbf7c0) Stream removed, broadcasting: 3 I0506 11:00:23.191296 7 log.go:172] (0xc000a8c4d0) (0xc001cc30e0) Stream removed, broadcasting: 5 May 6 11:00:23.191: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 I0506 11:00:23.191336 7 log.go:172] (0xc000a8c4d0) Go away received May 6 11:00:23.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-lghth" for this suite. 
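Note: the connectivity check above is just a UDP echo of the string hostName to each netserver pod on port 8081, issued from the host-network test pod. The exec shown in the log can be run manually in essentially the same form (the namespace and pod IP are whatever happens to exist at the time):

kubectl -n e2e-tests-pod-network-test-lghth exec host-test-container-pod -- \
    /bin/sh -c 'echo hostName | nc -w 1 -u 10.244.1.120 8081 | grep -v "^\s*$"'
# A non-empty reply (the netserver's hostname) means node-to-pod UDP traffic works.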
May 6 11:00:47.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:00:47.266: INFO: namespace: e2e-tests-pod-network-test-lghth, resource: bindings, ignored listing per whitelist May 6 11:00:47.275: INFO: namespace e2e-tests-pod-network-test-lghth deletion completed in 24.080137661s • [SLOW TEST:56.702 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:00:47.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 11:01:07.578: INFO: Container started at 2020-05-06 11:00:50 +0000 UTC, pod became ready at 2020-05-06 11:01:07 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:01:07.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-9nkbc" for this suite. 
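Note: the roughly 17 second gap above between "container started" and "pod became ready" is the readiness probe's initial delay doing its job. A hedged sketch of the relevant fields (image, command and timings are illustrative, not the test's values):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["true"]        # always succeeds once probing starts
      initialDelaySeconds: 15    # pod stays NotReady for at least this long
      periodSeconds: 5
EOF
kubectl get pod readiness-demo -w   # READY flips to 1/1 only after the initial delay has elapsed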
May 6 11:01:29.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:01:29.610: INFO: namespace: e2e-tests-container-probe-9nkbc, resource: bindings, ignored listing per whitelist May 6 11:01:29.662: INFO: namespace e2e-tests-container-probe-9nkbc deletion completed in 22.079687513s • [SLOW TEST:42.387 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:01:29.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-f6qb9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-f6qb9 to expose endpoints map[] May 6 11:01:29.860: INFO: Get endpoints failed (11.041772ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 6 11:01:30.864: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-f6qb9 exposes endpoints map[] (1.015158653s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-f6qb9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-f6qb9 to expose endpoints map[pod1:[80]] May 6 11:01:35.438: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-f6qb9 exposes endpoints map[pod1:[80]] (4.56687883s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-f6qb9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-f6qb9 to expose endpoints map[pod1:[80] pod2:[80]] May 6 11:01:39.664: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-f6qb9 exposes endpoints map[pod1:[80] pod2:[80]] (4.22222902s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-f6qb9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-f6qb9 to expose endpoints map[pod2:[80]] May 6 11:01:40.911: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-f6qb9 exposes endpoints map[pod2:[80]] (1.242625758s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-f6qb9 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-f6qb9 to expose endpoints map[] May 6 11:01:41.926: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-f6qb9 exposes endpoints map[] (1.009024824s elapsed) [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:01:42.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-f6qb9" for this suite. May 6 11:02:06.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:02:06.773: INFO: namespace: e2e-tests-services-f6qb9, resource: bindings, ignored listing per whitelist May 6 11:02:06.787: INFO: namespace e2e-tests-services-f6qb9 deletion completed in 24.48516363s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:37.124 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:02:06.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 6 11:02:07.215: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b4wts,SelfLink:/api/v1/namespaces/e2e-tests-watch-b4wts/configmaps/e2e-watch-test-configmap-a,UID:06977f19-8f89-11ea-99e8-0242ac110002,ResourceVersion:9029960,Generation:0,CreationTimestamp:2020-05-06 11:02:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 6 11:02:07.216: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b4wts,SelfLink:/api/v1/namespaces/e2e-tests-watch-b4wts/configmaps/e2e-watch-test-configmap-a,UID:06977f19-8f89-11ea-99e8-0242ac110002,ResourceVersion:9029960,Generation:0,CreationTimestamp:2020-05-06 11:02:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 6 11:02:17.224: INFO: Got : 
MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b4wts,SelfLink:/api/v1/namespaces/e2e-tests-watch-b4wts/configmaps/e2e-watch-test-configmap-a,UID:06977f19-8f89-11ea-99e8-0242ac110002,ResourceVersion:9029980,Generation:0,CreationTimestamp:2020-05-06 11:02:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 6 11:02:17.224: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b4wts,SelfLink:/api/v1/namespaces/e2e-tests-watch-b4wts/configmaps/e2e-watch-test-configmap-a,UID:06977f19-8f89-11ea-99e8-0242ac110002,ResourceVersion:9029980,Generation:0,CreationTimestamp:2020-05-06 11:02:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 6 11:02:27.230: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b4wts,SelfLink:/api/v1/namespaces/e2e-tests-watch-b4wts/configmaps/e2e-watch-test-configmap-a,UID:06977f19-8f89-11ea-99e8-0242ac110002,ResourceVersion:9030000,Generation:0,CreationTimestamp:2020-05-06 11:02:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 6 11:02:27.230: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b4wts,SelfLink:/api/v1/namespaces/e2e-tests-watch-b4wts/configmaps/e2e-watch-test-configmap-a,UID:06977f19-8f89-11ea-99e8-0242ac110002,ResourceVersion:9030000,Generation:0,CreationTimestamp:2020-05-06 11:02:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 6 11:02:37.237: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b4wts,SelfLink:/api/v1/namespaces/e2e-tests-watch-b4wts/configmaps/e2e-watch-test-configmap-a,UID:06977f19-8f89-11ea-99e8-0242ac110002,ResourceVersion:9030020,Generation:0,CreationTimestamp:2020-05-06 11:02:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} May 6 11:02:37.237: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-b4wts,SelfLink:/api/v1/namespaces/e2e-tests-watch-b4wts/configmaps/e2e-watch-test-configmap-a,UID:06977f19-8f89-11ea-99e8-0242ac110002,ResourceVersion:9030020,Generation:0,CreationTimestamp:2020-05-06 11:02:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 6 11:02:47.246: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-b4wts,SelfLink:/api/v1/namespaces/e2e-tests-watch-b4wts/configmaps/e2e-watch-test-configmap-b,UID:1e73754e-8f89-11ea-99e8-0242ac110002,ResourceVersion:9030040,Generation:0,CreationTimestamp:2020-05-06 11:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 6 11:02:47.246: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-b4wts,SelfLink:/api/v1/namespaces/e2e-tests-watch-b4wts/configmaps/e2e-watch-test-configmap-b,UID:1e73754e-8f89-11ea-99e8-0242ac110002,ResourceVersion:9030040,Generation:0,CreationTimestamp:2020-05-06 11:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 6 11:02:57.252: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-b4wts,SelfLink:/api/v1/namespaces/e2e-tests-watch-b4wts/configmaps/e2e-watch-test-configmap-b,UID:1e73754e-8f89-11ea-99e8-0242ac110002,ResourceVersion:9030060,Generation:0,CreationTimestamp:2020-05-06 11:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 6 11:02:57.252: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-b4wts,SelfLink:/api/v1/namespaces/e2e-tests-watch-b4wts/configmaps/e2e-watch-test-configmap-b,UID:1e73754e-8f89-11ea-99e8-0242ac110002,ResourceVersion:9030060,Generation:0,CreationTimestamp:2020-05-06 11:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:03:07.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-b4wts" for this suite. May 6 11:03:13.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:03:13.334: INFO: namespace: e2e-tests-watch-b4wts, resource: bindings, ignored listing per whitelist May 6 11:03:13.341: INFO: namespace e2e-tests-watch-b4wts deletion completed in 6.083302381s • [SLOW TEST:66.554 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:03:13.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-2e1adecb-8f89-11ea-b5fe-0242ac110017 May 6 11:03:13.547: INFO: Pod name my-hostname-basic-2e1adecb-8f89-11ea-b5fe-0242ac110017: Found 0 pods out of 1 May 6 11:03:18.552: INFO: Pod name my-hostname-basic-2e1adecb-8f89-11ea-b5fe-0242ac110017: Found 1 pods out of 1 May 6 11:03:18.552: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-2e1adecb-8f89-11ea-b5fe-0242ac110017" are running May 6 11:03:18.554: INFO: Pod "my-hostname-basic-2e1adecb-8f89-11ea-b5fe-0242ac110017-lhk9l" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 11:03:13 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 11:03:17 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 11:03:17 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 11:03:13 +0000 UTC Reason: Message:}]) May 6 11:03:18.554: INFO: Trying to dial the pod May 6 11:03:23.567: INFO: Controller my-hostname-basic-2e1adecb-8f89-11ea-b5fe-0242ac110017: Got expected result from replica 1 [my-hostname-basic-2e1adecb-8f89-11ea-b5fe-0242ac110017-lhk9l]: "my-hostname-basic-2e1adecb-8f89-11ea-b5fe-0242ac110017-lhk9l", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:03:23.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-cmzkh" for this suite. May 6 11:03:29.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:03:29.651: INFO: namespace: e2e-tests-replication-controller-cmzkh, resource: bindings, ignored listing per whitelist May 6 11:03:29.671: INFO: namespace e2e-tests-replication-controller-cmzkh deletion completed in 6.099117783s • [SLOW TEST:16.330 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:03:29.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 6 11:03:36.780: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:03:37.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-z9nf9" for this suite. 
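Note: adoption and release in the ReplicaSet test above hinge entirely on label selectors and ownerReferences: a bare pod whose labels match the selector is adopted, and changing the label releases it again. The pod name below is the one from the log; the selector value and commands are an illustrative sketch:

kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[*].kind}{"\n"}'   # ReplicaSet once adopted
kubectl label pod pod-adoption-release name=not-matching --overwrite
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}{"\n"}'           # empty again after release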
May 6 11:04:01.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:04:01.871: INFO: namespace: e2e-tests-replicaset-z9nf9, resource: bindings, ignored listing per whitelist May 6 11:04:01.948: INFO: namespace e2e-tests-replicaset-z9nf9 deletion completed in 24.149459767s • [SLOW TEST:32.276 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:04:01.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:04:02.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-z4fbt" for this suite. 
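Note: the point of the kubelet test above is simply that a pod whose container command always fails can still be deleted cleanly. A hedged reproduction with invented names:

kubectl run always-fails --image=busybox --restart=Never -- /bin/false
kubectl get pod always-fails          # the container exits non-zero and the pod ends up Failed
kubectl delete pod always-fails       # deletion must succeed regardless of the container's state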
May 6 11:04:08.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:04:08.161: INFO: namespace: e2e-tests-kubelet-test-z4fbt, resource: bindings, ignored listing per whitelist May 6 11:04:08.220: INFO: namespace e2e-tests-kubelet-test-z4fbt deletion completed in 6.080912191s • [SLOW TEST:6.273 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:04:08.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 11:04:08.341: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ec9bb61-8f89-11ea-b5fe-0242ac110017" in namespace "e2e-tests-downward-api-sthxm" to be "success or failure" May 6 11:04:08.349: INFO: Pod "downwardapi-volume-4ec9bb61-8f89-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 7.027662ms May 6 11:04:10.501: INFO: Pod "downwardapi-volume-4ec9bb61-8f89-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159643608s May 6 11:04:12.505: INFO: Pod "downwardapi-volume-4ec9bb61-8f89-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.163425446s STEP: Saw pod success May 6 11:04:12.505: INFO: Pod "downwardapi-volume-4ec9bb61-8f89-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:04:12.507: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-4ec9bb61-8f89-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 11:04:12.649: INFO: Waiting for pod downwardapi-volume-4ec9bb61-8f89-11ea-b5fe-0242ac110017 to disappear May 6 11:04:12.654: INFO: Pod downwardapi-volume-4ec9bb61-8f89-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:04:12.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-sthxm" for this suite. 
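The downward API spec above verifies that, when a container sets no memory limit, a resourceFieldRef for limits.memory exposed through a downward API volume falls back to the node's allocatable memory. A minimal sketch of such a pod follows; the pod name, image, command, and mount path are assumptions (the suite uses its own mounttest helper image), not the exact values from the run.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				// No resources.limits.memory is set, so the value written to the
				// file below defaults to the node's allocatable memory.
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // `kubectl create -f -`, then check the container log for the value
}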
May 6 11:04:18.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:04:18.736: INFO: namespace: e2e-tests-downward-api-sthxm, resource: bindings, ignored listing per whitelist May 6 11:04:18.743: INFO: namespace e2e-tests-downward-api-sthxm deletion completed in 6.086392876s • [SLOW TEST:10.522 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:04:18.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 6 11:04:18.843: INFO: Waiting up to 5m0s for pod "pod-550c2cb2-8f89-11ea-b5fe-0242ac110017" in namespace "e2e-tests-emptydir-4h7rt" to be "success or failure" May 6 11:04:18.859: INFO: Pod "pod-550c2cb2-8f89-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.314356ms May 6 11:04:20.864: INFO: Pod "pod-550c2cb2-8f89-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02068361s May 6 11:04:22.963: INFO: Pod "pod-550c2cb2-8f89-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119537444s STEP: Saw pod success May 6 11:04:22.963: INFO: Pod "pod-550c2cb2-8f89-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:04:22.966: INFO: Trying to get logs from node hunter-worker2 pod pod-550c2cb2-8f89-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 11:04:22.993: INFO: Waiting for pod pod-550c2cb2-8f89-11ea-b5fe-0242ac110017 to disappear May 6 11:04:23.112: INFO: Pod pod-550c2cb2-8f89-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:04:23.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4h7rt" for this suite. 
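The emptydir spec above runs a non-root container that creates a file with mode 0666 in an emptyDir volume on the node's default medium and checks the resulting ownership and permissions. A rough equivalent, with assumed pod name, UID, and image, is below; the (non-root,0777,tmpfs) variant later in this log differs only in the mode and in requesting the memory medium.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1001) // assumed non-root UID

	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file with mode 0666 and print the volume contents with ownership/permissions.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && ls -ln /test-volume"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Leaving Medium empty means the node's default storage medium;
				// Medium: corev1.StorageMediumMemory would request tmpfs instead.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}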
May 6 11:04:29.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:04:29.225: INFO: namespace: e2e-tests-emptydir-4h7rt, resource: bindings, ignored listing per whitelist May 6 11:04:29.233: INFO: namespace e2e-tests-emptydir-4h7rt deletion completed in 6.109611681s • [SLOW TEST:10.490 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:04:29.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:04:33.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-s4bmn" for this suite. 
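Both kubelet specs above ("should be possible to delete" and "should have an terminated reason") schedule a busybox pod whose command always fails and then delete it or inspect its termination state. A hand-written approximation of such a pod follows; the pod name, the failing command, and the kubectl checks in the comments are assumptions, not taken from the suite.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // exits non-zero every time it runs
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))

	// The container keeps failing, and the kubelet records a terminated state for it.
	// Something along these lines should surface the reason (typically "Error"):
	//   kubectl get pod bin-false \
	//     -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
	// and deleting the pod is expected to work despite the crash loop:
	//   kubectl delete pod bin-false
}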
May 6 11:04:39.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:04:39.540: INFO: namespace: e2e-tests-kubelet-test-s4bmn, resource: bindings, ignored listing per whitelist May 6 11:04:39.585: INFO: namespace e2e-tests-kubelet-test-s4bmn deletion completed in 6.202881931s • [SLOW TEST:10.351 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:04:39.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 6 11:04:39.735: INFO: Waiting up to 5m0s for pod "var-expansion-617e9958-8f89-11ea-b5fe-0242ac110017" in namespace "e2e-tests-var-expansion-mqjxs" to be "success or failure" May 6 11:04:39.738: INFO: Pod "var-expansion-617e9958-8f89-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.433156ms May 6 11:04:41.743: INFO: Pod "var-expansion-617e9958-8f89-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007698821s May 6 11:04:43.746: INFO: Pod "var-expansion-617e9958-8f89-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010948537s STEP: Saw pod success May 6 11:04:43.746: INFO: Pod "var-expansion-617e9958-8f89-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:04:43.748: INFO: Trying to get logs from node hunter-worker pod var-expansion-617e9958-8f89-11ea-b5fe-0242ac110017 container dapi-container: STEP: delete the pod May 6 11:04:43.805: INFO: Waiting for pod var-expansion-617e9958-8f89-11ea-b5fe-0242ac110017 to disappear May 6 11:04:43.812: INFO: Pod var-expansion-617e9958-8f89-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:04:43.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-mqjxs" for this suite. 
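For the variable-expansion spec above: Kubernetes substitutes $(VAR) references in a container's command and args from the container's environment before the process starts, which is what the dapi-container checks. A minimal sketch with assumed names and values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c"},
				// "$(MESSAGE)" is expanded by Kubernetes, not by the shell,
				// so the shell only ever sees the substituted value.
				Args: []string{"echo expanded value: $(MESSAGE)"},
				Env: []corev1.EnvVar{{
					Name:  "MESSAGE",
					Value: "hello from variable expansion",
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // the container log should contain the expanded value
}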
May 6 11:04:49.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:04:49.852: INFO: namespace: e2e-tests-var-expansion-mqjxs, resource: bindings, ignored listing per whitelist May 6 11:04:49.898: INFO: namespace e2e-tests-var-expansion-mqjxs deletion completed in 6.083496357s • [SLOW TEST:10.313 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:04:49.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 6 11:04:54.568: INFO: Successfully updated pod "labelsupdate67a0f032-8f89-11ea-b5fe-0242ac110017" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:04:56.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-nw9zn" for this suite. 
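The "should update labels on modification" spec above relies on the kubelet refreshing a downward API volume file backed by metadata.labels after the pod's labels change. A sketch of that wiring, with assumed names; the annotation-update variant later in this log works the same way with FieldPath "metadata.annotations".

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta: metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo",
			Labels: map[string]string{"stage": "before"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
	// After e.g. `kubectl label pod labelsupdate-demo stage=after --overwrite`,
	// the kubelet rewrites /etc/podinfo/labels inside the running container on its
	// next sync, which is the change the spec waits to observe.
}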
May 6 11:05:18.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:05:18.607: INFO: namespace: e2e-tests-downward-api-nw9zn, resource: bindings, ignored listing per whitelist May 6 11:05:18.671: INFO: namespace e2e-tests-downward-api-nw9zn deletion completed in 22.083936541s • [SLOW TEST:28.772 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:05:18.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 11:05:18.809: INFO: Creating ReplicaSet my-hostname-basic-78cb29f6-8f89-11ea-b5fe-0242ac110017 May 6 11:05:18.865: INFO: Pod name my-hostname-basic-78cb29f6-8f89-11ea-b5fe-0242ac110017: Found 0 pods out of 1 May 6 11:05:23.874: INFO: Pod name my-hostname-basic-78cb29f6-8f89-11ea-b5fe-0242ac110017: Found 1 pods out of 1 May 6 11:05:23.874: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-78cb29f6-8f89-11ea-b5fe-0242ac110017" is running May 6 11:05:23.878: INFO: Pod "my-hostname-basic-78cb29f6-8f89-11ea-b5fe-0242ac110017-bq4vh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 11:05:18 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 11:05:21 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 11:05:21 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 11:05:18 +0000 UTC Reason: Message:}]) May 6 11:05:23.878: INFO: Trying to dial the pod May 6 11:05:28.890: INFO: Controller my-hostname-basic-78cb29f6-8f89-11ea-b5fe-0242ac110017: Got expected result from replica 1 [my-hostname-basic-78cb29f6-8f89-11ea-b5fe-0242ac110017-bq4vh]: "my-hostname-basic-78cb29f6-8f89-11ea-b5fe-0242ac110017-bq4vh", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:05:28.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-s99cn" for this suite. 
May 6 11:05:34.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:05:34.929: INFO: namespace: e2e-tests-replicaset-s99cn, resource: bindings, ignored listing per whitelist May 6 11:05:35.012: INFO: namespace e2e-tests-replicaset-s99cn deletion completed in 6.118013867s • [SLOW TEST:16.341 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:05:35.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-gffmc May 6 11:05:39.283: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-gffmc STEP: checking the pod's current state and verifying that restartCount is present May 6 11:05:39.285: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:09:40.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-gffmc" for this suite. 
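The probing spec above runs a pod with an HTTP liveness probe against /healthz and watches restartCount stay at 0 for roughly four minutes, confirming that a healthy probe never triggers a restart. The sketch below is an assumed shape of such a pod, not the suite's exact object; the image, port, and probe timings are placeholders, and the probe handler is set through its promoted field so the snippet compiles whether the embedded struct is named Handler (as in the client-go vintage matching this 1.13 run) or ProbeHandler (newer releases).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	liveness := &corev1.Probe{
		InitialDelaySeconds: 15,
		TimeoutSeconds:      5,
		FailureThreshold:    3,
	}
	// Probe /healthz on port 8080; any 2xx response counts as healthy.
	liveness.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}

	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "liveness",
				Image:         "k8s.gcr.io/liveness", // assumed; any server answering /healthz works
				Args:          []string{"/server"},
				Ports:         []corev1.ContainerPort{{ContainerPort: 8080}},
				LivenessProbe: liveness,
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
	// While /healthz keeps succeeding, the kubelet never kills the container, so
	// restartCount stays at 0 for the whole observation window the spec watches.
}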
May 6 11:09:46.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:09:46.505: INFO: namespace: e2e-tests-container-probe-gffmc, resource: bindings, ignored listing per whitelist May 6 11:09:46.514: INFO: namespace e2e-tests-container-probe-gffmc deletion completed in 6.091198548s • [SLOW TEST:251.501 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:09:46.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 6 11:09:56.687: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7h9t4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 11:09:56.687: INFO: >>> kubeConfig: /root/.kube/config I0506 11:09:56.724866 7 log.go:172] (0xc000b9efd0) (0xc0019feaa0) Create stream I0506 11:09:56.724906 7 log.go:172] (0xc000b9efd0) (0xc0019feaa0) Stream added, broadcasting: 1 I0506 11:09:56.727346 7 log.go:172] (0xc000b9efd0) Reply frame received for 1 I0506 11:09:56.727396 7 log.go:172] (0xc000b9efd0) (0xc0016f6000) Create stream I0506 11:09:56.727418 7 log.go:172] (0xc000b9efd0) (0xc0016f6000) Stream added, broadcasting: 3 I0506 11:09:56.728415 7 log.go:172] (0xc000b9efd0) Reply frame received for 3 I0506 11:09:56.728452 7 log.go:172] (0xc000b9efd0) (0xc00206ed20) Create stream I0506 11:09:56.728469 7 log.go:172] (0xc000b9efd0) (0xc00206ed20) Stream added, broadcasting: 5 I0506 11:09:56.729699 7 log.go:172] (0xc000b9efd0) Reply frame received for 5 I0506 11:09:56.813992 7 log.go:172] (0xc000b9efd0) Data frame received for 3 I0506 11:09:56.814024 7 log.go:172] (0xc0016f6000) (3) Data frame handling I0506 11:09:56.814033 7 log.go:172] (0xc0016f6000) (3) Data frame sent I0506 11:09:56.814039 7 log.go:172] (0xc000b9efd0) Data frame received for 3 I0506 11:09:56.814051 7 log.go:172] (0xc0016f6000) (3) Data frame handling I0506 11:09:56.814079 7 log.go:172] (0xc000b9efd0) Data frame received for 5 I0506 11:09:56.814087 7 log.go:172] (0xc00206ed20) (5) Data frame handling I0506 11:09:56.815696 7 log.go:172] (0xc000b9efd0) Data frame received for 1 I0506 11:09:56.815752 7 log.go:172] (0xc0019feaa0) (1) Data frame handling I0506 11:09:56.815808 7 log.go:172] (0xc0019feaa0) (1) Data frame sent I0506 11:09:56.815832 7 log.go:172] (0xc000b9efd0) 
(0xc0019feaa0) Stream removed, broadcasting: 1 I0506 11:09:56.815861 7 log.go:172] (0xc000b9efd0) Go away received I0506 11:09:56.815947 7 log.go:172] (0xc000b9efd0) (0xc0019feaa0) Stream removed, broadcasting: 1 I0506 11:09:56.815965 7 log.go:172] (0xc000b9efd0) (0xc0016f6000) Stream removed, broadcasting: 3 I0506 11:09:56.815973 7 log.go:172] (0xc000b9efd0) (0xc00206ed20) Stream removed, broadcasting: 5 May 6 11:09:56.815: INFO: Exec stderr: "" May 6 11:09:56.816: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7h9t4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 11:09:56.816: INFO: >>> kubeConfig: /root/.kube/config I0506 11:09:56.844472 7 log.go:172] (0xc001a6e2c0) (0xc00206efa0) Create stream I0506 11:09:56.844506 7 log.go:172] (0xc001a6e2c0) (0xc00206efa0) Stream added, broadcasting: 1 I0506 11:09:56.847491 7 log.go:172] (0xc001a6e2c0) Reply frame received for 1 I0506 11:09:56.847535 7 log.go:172] (0xc001a6e2c0) (0xc00185f220) Create stream I0506 11:09:56.847555 7 log.go:172] (0xc001a6e2c0) (0xc00185f220) Stream added, broadcasting: 3 I0506 11:09:56.848404 7 log.go:172] (0xc001a6e2c0) Reply frame received for 3 I0506 11:09:56.848429 7 log.go:172] (0xc001a6e2c0) (0xc0019feb40) Create stream I0506 11:09:56.848438 7 log.go:172] (0xc001a6e2c0) (0xc0019feb40) Stream added, broadcasting: 5 I0506 11:09:56.849417 7 log.go:172] (0xc001a6e2c0) Reply frame received for 5 I0506 11:09:56.918051 7 log.go:172] (0xc001a6e2c0) Data frame received for 5 I0506 11:09:56.918093 7 log.go:172] (0xc0019feb40) (5) Data frame handling I0506 11:09:56.918136 7 log.go:172] (0xc001a6e2c0) Data frame received for 3 I0506 11:09:56.918176 7 log.go:172] (0xc00185f220) (3) Data frame handling I0506 11:09:56.918203 7 log.go:172] (0xc00185f220) (3) Data frame sent I0506 11:09:56.918219 7 log.go:172] (0xc001a6e2c0) Data frame received for 3 I0506 11:09:56.918232 7 log.go:172] (0xc00185f220) (3) Data frame handling I0506 11:09:56.919731 7 log.go:172] (0xc001a6e2c0) Data frame received for 1 I0506 11:09:56.919757 7 log.go:172] (0xc00206efa0) (1) Data frame handling I0506 11:09:56.919770 7 log.go:172] (0xc00206efa0) (1) Data frame sent I0506 11:09:56.919781 7 log.go:172] (0xc001a6e2c0) (0xc00206efa0) Stream removed, broadcasting: 1 I0506 11:09:56.919797 7 log.go:172] (0xc001a6e2c0) Go away received I0506 11:09:56.919923 7 log.go:172] (0xc001a6e2c0) (0xc00206efa0) Stream removed, broadcasting: 1 I0506 11:09:56.919953 7 log.go:172] (0xc001a6e2c0) (0xc00185f220) Stream removed, broadcasting: 3 I0506 11:09:56.919966 7 log.go:172] (0xc001a6e2c0) (0xc0019feb40) Stream removed, broadcasting: 5 May 6 11:09:56.919: INFO: Exec stderr: "" May 6 11:09:56.920: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7h9t4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 11:09:56.920: INFO: >>> kubeConfig: /root/.kube/config I0506 11:09:56.958079 7 log.go:172] (0xc001e742c0) (0xc001bb83c0) Create stream I0506 11:09:56.958102 7 log.go:172] (0xc001e742c0) (0xc001bb83c0) Stream added, broadcasting: 1 I0506 11:09:56.961488 7 log.go:172] (0xc001e742c0) Reply frame received for 1 I0506 11:09:56.961521 7 log.go:172] (0xc001e742c0) (0xc00185f2c0) Create stream I0506 11:09:56.961532 7 log.go:172] (0xc001e742c0) (0xc00185f2c0) Stream added, broadcasting: 3 I0506 11:09:56.962643 7 log.go:172] (0xc001e742c0) Reply frame received 
for 3 I0506 11:09:56.962671 7 log.go:172] (0xc001e742c0) (0xc001bb8460) Create stream I0506 11:09:56.962682 7 log.go:172] (0xc001e742c0) (0xc001bb8460) Stream added, broadcasting: 5 I0506 11:09:56.963582 7 log.go:172] (0xc001e742c0) Reply frame received for 5 I0506 11:09:57.022172 7 log.go:172] (0xc001e742c0) Data frame received for 5 I0506 11:09:57.022214 7 log.go:172] (0xc001bb8460) (5) Data frame handling I0506 11:09:57.022247 7 log.go:172] (0xc001e742c0) Data frame received for 3 I0506 11:09:57.022274 7 log.go:172] (0xc00185f2c0) (3) Data frame handling I0506 11:09:57.022293 7 log.go:172] (0xc00185f2c0) (3) Data frame sent I0506 11:09:57.022308 7 log.go:172] (0xc001e742c0) Data frame received for 3 I0506 11:09:57.022318 7 log.go:172] (0xc00185f2c0) (3) Data frame handling I0506 11:09:57.024284 7 log.go:172] (0xc001e742c0) Data frame received for 1 I0506 11:09:57.024327 7 log.go:172] (0xc001bb83c0) (1) Data frame handling I0506 11:09:57.024348 7 log.go:172] (0xc001bb83c0) (1) Data frame sent I0506 11:09:57.024499 7 log.go:172] (0xc001e742c0) (0xc001bb83c0) Stream removed, broadcasting: 1 I0506 11:09:57.024628 7 log.go:172] (0xc001e742c0) (0xc001bb83c0) Stream removed, broadcasting: 1 I0506 11:09:57.024651 7 log.go:172] (0xc001e742c0) Go away received I0506 11:09:57.024687 7 log.go:172] (0xc001e742c0) (0xc00185f2c0) Stream removed, broadcasting: 3 I0506 11:09:57.024727 7 log.go:172] (0xc001e742c0) (0xc001bb8460) Stream removed, broadcasting: 5 May 6 11:09:57.024: INFO: Exec stderr: "" May 6 11:09:57.024: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7h9t4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 11:09:57.024: INFO: >>> kubeConfig: /root/.kube/config I0506 11:09:57.061943 7 log.go:172] (0xc001e74790) (0xc001bb8780) Create stream I0506 11:09:57.061967 7 log.go:172] (0xc001e74790) (0xc001bb8780) Stream added, broadcasting: 1 I0506 11:09:57.067797 7 log.go:172] (0xc001e74790) Reply frame received for 1 I0506 11:09:57.067867 7 log.go:172] (0xc001e74790) (0xc00185f400) Create stream I0506 11:09:57.067888 7 log.go:172] (0xc001e74790) (0xc00185f400) Stream added, broadcasting: 3 I0506 11:09:57.073956 7 log.go:172] (0xc001e74790) Reply frame received for 3 I0506 11:09:57.074035 7 log.go:172] (0xc001e74790) (0xc0016f60a0) Create stream I0506 11:09:57.074068 7 log.go:172] (0xc001e74790) (0xc0016f60a0) Stream added, broadcasting: 5 I0506 11:09:57.075391 7 log.go:172] (0xc001e74790) Reply frame received for 5 I0506 11:09:57.141809 7 log.go:172] (0xc001e74790) Data frame received for 5 I0506 11:09:57.141841 7 log.go:172] (0xc0016f60a0) (5) Data frame handling I0506 11:09:57.141890 7 log.go:172] (0xc001e74790) Data frame received for 3 I0506 11:09:57.141943 7 log.go:172] (0xc00185f400) (3) Data frame handling I0506 11:09:57.141986 7 log.go:172] (0xc00185f400) (3) Data frame sent I0506 11:09:57.142012 7 log.go:172] (0xc001e74790) Data frame received for 3 I0506 11:09:57.142034 7 log.go:172] (0xc00185f400) (3) Data frame handling I0506 11:09:57.143401 7 log.go:172] (0xc001e74790) Data frame received for 1 I0506 11:09:57.143440 7 log.go:172] (0xc001bb8780) (1) Data frame handling I0506 11:09:57.143462 7 log.go:172] (0xc001bb8780) (1) Data frame sent I0506 11:09:57.143484 7 log.go:172] (0xc001e74790) (0xc001bb8780) Stream removed, broadcasting: 1 I0506 11:09:57.143521 7 log.go:172] (0xc001e74790) Go away received I0506 11:09:57.143674 7 log.go:172] (0xc001e74790) (0xc001bb8780) 
Stream removed, broadcasting: 1 I0506 11:09:57.143702 7 log.go:172] (0xc001e74790) (0xc00185f400) Stream removed, broadcasting: 3 I0506 11:09:57.143715 7 log.go:172] (0xc001e74790) (0xc0016f60a0) Stream removed, broadcasting: 5 May 6 11:09:57.143: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 6 11:09:57.143: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7h9t4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 11:09:57.143: INFO: >>> kubeConfig: /root/.kube/config I0506 11:09:57.179293 7 log.go:172] (0xc001a6e790) (0xc00206f220) Create stream I0506 11:09:57.179327 7 log.go:172] (0xc001a6e790) (0xc00206f220) Stream added, broadcasting: 1 I0506 11:09:57.181514 7 log.go:172] (0xc001a6e790) Reply frame received for 1 I0506 11:09:57.181556 7 log.go:172] (0xc001a6e790) (0xc0019febe0) Create stream I0506 11:09:57.181572 7 log.go:172] (0xc001a6e790) (0xc0019febe0) Stream added, broadcasting: 3 I0506 11:09:57.182556 7 log.go:172] (0xc001a6e790) Reply frame received for 3 I0506 11:09:57.182590 7 log.go:172] (0xc001a6e790) (0xc00185f4a0) Create stream I0506 11:09:57.182603 7 log.go:172] (0xc001a6e790) (0xc00185f4a0) Stream added, broadcasting: 5 I0506 11:09:57.183472 7 log.go:172] (0xc001a6e790) Reply frame received for 5 I0506 11:09:57.246691 7 log.go:172] (0xc001a6e790) Data frame received for 5 I0506 11:09:57.246779 7 log.go:172] (0xc00185f4a0) (5) Data frame handling I0506 11:09:57.246833 7 log.go:172] (0xc001a6e790) Data frame received for 3 I0506 11:09:57.246867 7 log.go:172] (0xc0019febe0) (3) Data frame handling I0506 11:09:57.246900 7 log.go:172] (0xc0019febe0) (3) Data frame sent I0506 11:09:57.246912 7 log.go:172] (0xc001a6e790) Data frame received for 3 I0506 11:09:57.246938 7 log.go:172] (0xc0019febe0) (3) Data frame handling I0506 11:09:57.248293 7 log.go:172] (0xc001a6e790) Data frame received for 1 I0506 11:09:57.248350 7 log.go:172] (0xc00206f220) (1) Data frame handling I0506 11:09:57.248413 7 log.go:172] (0xc00206f220) (1) Data frame sent I0506 11:09:57.248491 7 log.go:172] (0xc001a6e790) (0xc00206f220) Stream removed, broadcasting: 1 I0506 11:09:57.248574 7 log.go:172] (0xc001a6e790) Go away received I0506 11:09:57.248637 7 log.go:172] (0xc001a6e790) (0xc00206f220) Stream removed, broadcasting: 1 I0506 11:09:57.248689 7 log.go:172] (0xc001a6e790) (0xc0019febe0) Stream removed, broadcasting: 3 I0506 11:09:57.248712 7 log.go:172] (0xc001a6e790) (0xc00185f4a0) Stream removed, broadcasting: 5 May 6 11:09:57.248: INFO: Exec stderr: "" May 6 11:09:57.248: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7h9t4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 11:09:57.248: INFO: >>> kubeConfig: /root/.kube/config I0506 11:09:57.281585 7 log.go:172] (0xc001a6ec60) (0xc00206f400) Create stream I0506 11:09:57.281642 7 log.go:172] (0xc001a6ec60) (0xc00206f400) Stream added, broadcasting: 1 I0506 11:09:57.284395 7 log.go:172] (0xc001a6ec60) Reply frame received for 1 I0506 11:09:57.284466 7 log.go:172] (0xc001a6ec60) (0xc00185f5e0) Create stream I0506 11:09:57.284494 7 log.go:172] (0xc001a6ec60) (0xc00185f5e0) Stream added, broadcasting: 3 I0506 11:09:57.285920 7 log.go:172] (0xc001a6ec60) Reply frame received for 3 I0506 11:09:57.285948 7 log.go:172] (0xc001a6ec60) (0xc00185f680) 
Create stream I0506 11:09:57.285960 7 log.go:172] (0xc001a6ec60) (0xc00185f680) Stream added, broadcasting: 5 I0506 11:09:57.287067 7 log.go:172] (0xc001a6ec60) Reply frame received for 5 I0506 11:09:57.341768 7 log.go:172] (0xc001a6ec60) Data frame received for 3 I0506 11:09:57.341792 7 log.go:172] (0xc00185f5e0) (3) Data frame handling I0506 11:09:57.341800 7 log.go:172] (0xc00185f5e0) (3) Data frame sent I0506 11:09:57.341804 7 log.go:172] (0xc001a6ec60) Data frame received for 3 I0506 11:09:57.341813 7 log.go:172] (0xc00185f5e0) (3) Data frame handling I0506 11:09:57.341832 7 log.go:172] (0xc001a6ec60) Data frame received for 5 I0506 11:09:57.341838 7 log.go:172] (0xc00185f680) (5) Data frame handling I0506 11:09:57.343309 7 log.go:172] (0xc001a6ec60) Data frame received for 1 I0506 11:09:57.343324 7 log.go:172] (0xc00206f400) (1) Data frame handling I0506 11:09:57.343333 7 log.go:172] (0xc00206f400) (1) Data frame sent I0506 11:09:57.343346 7 log.go:172] (0xc001a6ec60) (0xc00206f400) Stream removed, broadcasting: 1 I0506 11:09:57.343411 7 log.go:172] (0xc001a6ec60) Go away received I0506 11:09:57.343444 7 log.go:172] (0xc001a6ec60) (0xc00206f400) Stream removed, broadcasting: 1 I0506 11:09:57.343460 7 log.go:172] (0xc001a6ec60) (0xc00185f5e0) Stream removed, broadcasting: 3 I0506 11:09:57.343472 7 log.go:172] (0xc001a6ec60) (0xc00185f680) Stream removed, broadcasting: 5 May 6 11:09:57.343: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 6 11:09:57.343: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7h9t4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 11:09:57.343: INFO: >>> kubeConfig: /root/.kube/config I0506 11:09:57.366116 7 log.go:172] (0xc001a6f130) (0xc00206f5e0) Create stream I0506 11:09:57.366141 7 log.go:172] (0xc001a6f130) (0xc00206f5e0) Stream added, broadcasting: 1 I0506 11:09:57.367598 7 log.go:172] (0xc001a6f130) Reply frame received for 1 I0506 11:09:57.367623 7 log.go:172] (0xc001a6f130) (0xc00185f720) Create stream I0506 11:09:57.367632 7 log.go:172] (0xc001a6f130) (0xc00185f720) Stream added, broadcasting: 3 I0506 11:09:57.368346 7 log.go:172] (0xc001a6f130) Reply frame received for 3 I0506 11:09:57.368378 7 log.go:172] (0xc001a6f130) (0xc00206f680) Create stream I0506 11:09:57.368391 7 log.go:172] (0xc001a6f130) (0xc00206f680) Stream added, broadcasting: 5 I0506 11:09:57.369103 7 log.go:172] (0xc001a6f130) Reply frame received for 5 I0506 11:09:57.491581 7 log.go:172] (0xc001a6f130) Data frame received for 5 I0506 11:09:57.491620 7 log.go:172] (0xc00206f680) (5) Data frame handling I0506 11:09:57.491650 7 log.go:172] (0xc001a6f130) Data frame received for 3 I0506 11:09:57.491684 7 log.go:172] (0xc00185f720) (3) Data frame handling I0506 11:09:57.491714 7 log.go:172] (0xc00185f720) (3) Data frame sent I0506 11:09:57.491728 7 log.go:172] (0xc001a6f130) Data frame received for 3 I0506 11:09:57.491741 7 log.go:172] (0xc00185f720) (3) Data frame handling I0506 11:09:57.493023 7 log.go:172] (0xc001a6f130) Data frame received for 1 I0506 11:09:57.493059 7 log.go:172] (0xc00206f5e0) (1) Data frame handling I0506 11:09:57.493089 7 log.go:172] (0xc00206f5e0) (1) Data frame sent I0506 11:09:57.493329 7 log.go:172] (0xc001a6f130) (0xc00206f5e0) Stream removed, broadcasting: 1 I0506 11:09:57.493363 7 log.go:172] (0xc001a6f130) Go away received I0506 11:09:57.493629 7 
log.go:172] (0xc001a6f130) (0xc00206f5e0) Stream removed, broadcasting: 1 I0506 11:09:57.493659 7 log.go:172] (0xc001a6f130) (0xc00185f720) Stream removed, broadcasting: 3 I0506 11:09:57.493680 7 log.go:172] (0xc001a6f130) (0xc00206f680) Stream removed, broadcasting: 5 May 6 11:09:57.493: INFO: Exec stderr: "" May 6 11:09:57.493: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7h9t4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 11:09:57.493: INFO: >>> kubeConfig: /root/.kube/config I0506 11:09:57.522896 7 log.go:172] (0xc001a6f600) (0xc00206f9a0) Create stream I0506 11:09:57.522923 7 log.go:172] (0xc001a6f600) (0xc00206f9a0) Stream added, broadcasting: 1 I0506 11:09:57.524771 7 log.go:172] (0xc001a6f600) Reply frame received for 1 I0506 11:09:57.524797 7 log.go:172] (0xc001a6f600) (0xc00206fae0) Create stream I0506 11:09:57.524807 7 log.go:172] (0xc001a6f600) (0xc00206fae0) Stream added, broadcasting: 3 I0506 11:09:57.525658 7 log.go:172] (0xc001a6f600) Reply frame received for 3 I0506 11:09:57.525693 7 log.go:172] (0xc001a6f600) (0xc00185f7c0) Create stream I0506 11:09:57.525704 7 log.go:172] (0xc001a6f600) (0xc00185f7c0) Stream added, broadcasting: 5 I0506 11:09:57.526429 7 log.go:172] (0xc001a6f600) Reply frame received for 5 I0506 11:09:57.584638 7 log.go:172] (0xc001a6f600) Data frame received for 5 I0506 11:09:57.584677 7 log.go:172] (0xc00185f7c0) (5) Data frame handling I0506 11:09:57.584703 7 log.go:172] (0xc001a6f600) Data frame received for 3 I0506 11:09:57.584716 7 log.go:172] (0xc00206fae0) (3) Data frame handling I0506 11:09:57.584728 7 log.go:172] (0xc00206fae0) (3) Data frame sent I0506 11:09:57.584741 7 log.go:172] (0xc001a6f600) Data frame received for 3 I0506 11:09:57.584754 7 log.go:172] (0xc00206fae0) (3) Data frame handling I0506 11:09:57.586586 7 log.go:172] (0xc001a6f600) Data frame received for 1 I0506 11:09:57.586619 7 log.go:172] (0xc00206f9a0) (1) Data frame handling I0506 11:09:57.586639 7 log.go:172] (0xc00206f9a0) (1) Data frame sent I0506 11:09:57.586693 7 log.go:172] (0xc001a6f600) (0xc00206f9a0) Stream removed, broadcasting: 1 I0506 11:09:57.586719 7 log.go:172] (0xc001a6f600) Go away received I0506 11:09:57.586876 7 log.go:172] (0xc001a6f600) (0xc00206f9a0) Stream removed, broadcasting: 1 I0506 11:09:57.586913 7 log.go:172] (0xc001a6f600) (0xc00206fae0) Stream removed, broadcasting: 3 I0506 11:09:57.586939 7 log.go:172] (0xc001a6f600) (0xc00185f7c0) Stream removed, broadcasting: 5 May 6 11:09:57.586: INFO: Exec stderr: "" May 6 11:09:57.587: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7h9t4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 11:09:57.587: INFO: >>> kubeConfig: /root/.kube/config I0506 11:09:57.615123 7 log.go:172] (0xc001e74c60) (0xc001bb8aa0) Create stream I0506 11:09:57.615147 7 log.go:172] (0xc001e74c60) (0xc001bb8aa0) Stream added, broadcasting: 1 I0506 11:09:57.617957 7 log.go:172] (0xc001e74c60) Reply frame received for 1 I0506 11:09:57.618000 7 log.go:172] (0xc001e74c60) (0xc001bb8b40) Create stream I0506 11:09:57.618011 7 log.go:172] (0xc001e74c60) (0xc001bb8b40) Stream added, broadcasting: 3 I0506 11:09:57.618887 7 log.go:172] (0xc001e74c60) Reply frame received for 3 I0506 11:09:57.618913 7 log.go:172] (0xc001e74c60) (0xc00185f860) Create stream I0506 11:09:57.618924 7 
log.go:172] (0xc001e74c60) (0xc00185f860) Stream added, broadcasting: 5 I0506 11:09:57.619740 7 log.go:172] (0xc001e74c60) Reply frame received for 5 I0506 11:09:57.678260 7 log.go:172] (0xc001e74c60) Data frame received for 5 I0506 11:09:57.678298 7 log.go:172] (0xc001e74c60) Data frame received for 3 I0506 11:09:57.678336 7 log.go:172] (0xc001bb8b40) (3) Data frame handling I0506 11:09:57.678374 7 log.go:172] (0xc001bb8b40) (3) Data frame sent I0506 11:09:57.678411 7 log.go:172] (0xc001e74c60) Data frame received for 3 I0506 11:09:57.678423 7 log.go:172] (0xc001bb8b40) (3) Data frame handling I0506 11:09:57.678471 7 log.go:172] (0xc00185f860) (5) Data frame handling I0506 11:09:57.680295 7 log.go:172] (0xc001e74c60) Data frame received for 1 I0506 11:09:57.680336 7 log.go:172] (0xc001bb8aa0) (1) Data frame handling I0506 11:09:57.680366 7 log.go:172] (0xc001bb8aa0) (1) Data frame sent I0506 11:09:57.680386 7 log.go:172] (0xc001e74c60) (0xc001bb8aa0) Stream removed, broadcasting: 1 I0506 11:09:57.680414 7 log.go:172] (0xc001e74c60) Go away received I0506 11:09:57.680512 7 log.go:172] (0xc001e74c60) (0xc001bb8aa0) Stream removed, broadcasting: 1 I0506 11:09:57.680529 7 log.go:172] (0xc001e74c60) (0xc001bb8b40) Stream removed, broadcasting: 3 I0506 11:09:57.680535 7 log.go:172] (0xc001e74c60) (0xc00185f860) Stream removed, broadcasting: 5 May 6 11:09:57.680: INFO: Exec stderr: "" May 6 11:09:57.680: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-7h9t4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 11:09:57.680: INFO: >>> kubeConfig: /root/.kube/config I0506 11:09:57.709564 7 log.go:172] (0xc0006ebad0) (0xc00248a000) Create stream I0506 11:09:57.709594 7 log.go:172] (0xc0006ebad0) (0xc00248a000) Stream added, broadcasting: 1 I0506 11:09:57.711589 7 log.go:172] (0xc0006ebad0) Reply frame received for 1 I0506 11:09:57.711630 7 log.go:172] (0xc0006ebad0) (0xc002190000) Create stream I0506 11:09:57.711644 7 log.go:172] (0xc0006ebad0) (0xc002190000) Stream added, broadcasting: 3 I0506 11:09:57.712263 7 log.go:172] (0xc0006ebad0) Reply frame received for 3 I0506 11:09:57.712308 7 log.go:172] (0xc0006ebad0) (0xc001bb80a0) Create stream I0506 11:09:57.712328 7 log.go:172] (0xc0006ebad0) (0xc001bb80a0) Stream added, broadcasting: 5 I0506 11:09:57.713011 7 log.go:172] (0xc0006ebad0) Reply frame received for 5 I0506 11:09:57.764377 7 log.go:172] (0xc0006ebad0) Data frame received for 5 I0506 11:09:57.764428 7 log.go:172] (0xc001bb80a0) (5) Data frame handling I0506 11:09:57.764466 7 log.go:172] (0xc0006ebad0) Data frame received for 3 I0506 11:09:57.764482 7 log.go:172] (0xc002190000) (3) Data frame handling I0506 11:09:57.764493 7 log.go:172] (0xc002190000) (3) Data frame sent I0506 11:09:57.764509 7 log.go:172] (0xc0006ebad0) Data frame received for 3 I0506 11:09:57.764518 7 log.go:172] (0xc002190000) (3) Data frame handling I0506 11:09:57.766265 7 log.go:172] (0xc0006ebad0) Data frame received for 1 I0506 11:09:57.766293 7 log.go:172] (0xc00248a000) (1) Data frame handling I0506 11:09:57.766313 7 log.go:172] (0xc00248a000) (1) Data frame sent I0506 11:09:57.766334 7 log.go:172] (0xc0006ebad0) (0xc00248a000) Stream removed, broadcasting: 1 I0506 11:09:57.766354 7 log.go:172] (0xc0006ebad0) Go away received I0506 11:09:57.766560 7 log.go:172] (0xc0006ebad0) (0xc00248a000) Stream removed, broadcasting: 1 I0506 11:09:57.766599 7 log.go:172] (0xc0006ebad0) (0xc002190000) 
Stream removed, broadcasting: 3 I0506 11:09:57.766617 7 log.go:172] (0xc0006ebad0) (0xc001bb80a0) Stream removed, broadcasting: 5 May 6 11:09:57.766: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:09:57.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-7h9t4" for this suite. May 6 11:10:43.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:10:43.839: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-7h9t4, resource: bindings, ignored listing per whitelist May 6 11:10:43.858: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-7h9t4 deletion completed in 46.086567663s • [SLOW TEST:57.344 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:10:43.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 6 11:10:44.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-z5mpx' May 6 11:10:46.516: INFO: stderr: "" May 6 11:10:46.516: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 11:10:46.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5mpx' May 6 11:10:46.624: INFO: stderr: "" May 6 11:10:46.624: INFO: stdout: "update-demo-nautilus-flq5s update-demo-nautilus-jn8f8 " May 6 11:10:46.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-flq5s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5mpx' May 6 11:10:46.741: INFO: stderr: "" May 6 11:10:46.741: INFO: stdout: "" May 6 11:10:46.741: INFO: update-demo-nautilus-flq5s is created but not running May 6 11:10:51.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-z5mpx' May 6 11:10:51.842: INFO: stderr: "" May 6 11:10:51.842: INFO: stdout: "update-demo-nautilus-flq5s update-demo-nautilus-jn8f8 " May 6 11:10:51.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-flq5s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5mpx' May 6 11:10:51.948: INFO: stderr: "" May 6 11:10:51.948: INFO: stdout: "true" May 6 11:10:51.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-flq5s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5mpx' May 6 11:10:52.051: INFO: stderr: "" May 6 11:10:52.051: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 11:10:52.051: INFO: validating pod update-demo-nautilus-flq5s May 6 11:10:52.055: INFO: got data: { "image": "nautilus.jpg" } May 6 11:10:52.055: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 11:10:52.055: INFO: update-demo-nautilus-flq5s is verified up and running May 6 11:10:52.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jn8f8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5mpx' May 6 11:10:52.160: INFO: stderr: "" May 6 11:10:52.160: INFO: stdout: "true" May 6 11:10:52.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jn8f8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-z5mpx' May 6 11:10:52.258: INFO: stderr: "" May 6 11:10:52.258: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 11:10:52.258: INFO: validating pod update-demo-nautilus-jn8f8 May 6 11:10:52.262: INFO: got data: { "image": "nautilus.jpg" } May 6 11:10:52.262: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 11:10:52.262: INFO: update-demo-nautilus-jn8f8 is verified up and running STEP: using delete to clean up resources May 6 11:10:52.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-z5mpx' May 6 11:10:52.410: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 6 11:10:52.410: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 6 11:10:52.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-z5mpx' May 6 11:10:52.512: INFO: stderr: "No resources found.\n" May 6 11:10:52.512: INFO: stdout: "" May 6 11:10:52.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-z5mpx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 11:10:52.754: INFO: stderr: "" May 6 11:10:52.754: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:10:52.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-z5mpx" for this suite. May 6 11:11:14.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:11:14.826: INFO: namespace: e2e-tests-kubectl-z5mpx, resource: bindings, ignored listing per whitelist May 6 11:11:14.858: INFO: namespace e2e-tests-kubectl-z5mpx deletion completed in 22.100061768s • [SLOW TEST:31.000 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:11:14.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-4d134849-8f8a-11ea-b5fe-0242ac110017 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:11:21.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-sr5f5" for this suite. 
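The ConfigMap spec above checks that both plain data keys and binaryData keys are materialised as files when the ConfigMap is mounted as a volume. A small illustration with assumed names, contents, and image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := &corev1.ConfigMap{
		TypeMeta:   metav1.TypeMeta{Kind: "ConfigMap", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
		Data:       map[string]string{"data-1": "value-1"},
		// BinaryData holds arbitrary bytes; it is base64-encoded on the wire and
		// written verbatim to a file when mounted through a configMap volume.
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef}},
	}

	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/configmap/data-1 && hexdump -C /etc/configmap/dump.bin"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/configmap"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-binary-demo"},
					},
				},
			}},
		},
	}

	// Print both objects; apply the ConfigMap first, then the pod.
	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}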
May 6 11:11:43.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:11:43.059: INFO: namespace: e2e-tests-configmap-sr5f5, resource: bindings, ignored listing per whitelist May 6 11:11:43.123: INFO: namespace e2e-tests-configmap-sr5f5 deletion completed in 22.090020033s • [SLOW TEST:28.265 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:11:43.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 11:11:43.213: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:11:44.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-fwc52" for this suite. 
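The CustomResourceDefinition spec above simply creates a CRD through the apiextensions API and deletes it again. A comparable definition is sketched below with an assumed group and kind; the apiextensions/v1beta1 types match the 1.13 vintage of this run, whereas current clusters use apiextensions.k8s.io/v1 and additionally require a structural schema.

package main

import (
	"encoding/json"
	"fmt"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	crd := &apiextv1beta1.CustomResourceDefinition{
		TypeMeta:   metav1.TypeMeta{Kind: "CustomResourceDefinition", APIVersion: "apiextensions.k8s.io/v1beta1"},
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.demo.example.com"}, // must be <plural>.<group>
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "demo.example.com",
			Version: "v1",
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural:   "widgets",
				Singular: "widget",
				Kind:     "Widget",
				ListKind: "WidgetList",
			},
		},
	}

	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
	// `kubectl create -f -` registers the new API; `kubectl delete crd widgets.demo.example.com`
	// removes it again, roughly the round trip the spec performs in-process.
}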
May 6 11:11:50.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:11:50.301: INFO: namespace: e2e-tests-custom-resource-definition-fwc52, resource: bindings, ignored listing per whitelist May 6 11:11:50.345: INFO: namespace e2e-tests-custom-resource-definition-fwc52 deletion completed in 6.076222985s • [SLOW TEST:7.221 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:11:50.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 6 11:11:50.462: INFO: Waiting up to 5m0s for pod "pod-62378096-8f8a-11ea-b5fe-0242ac110017" in namespace "e2e-tests-emptydir-npv4m" to be "success or failure" May 6 11:11:50.471: INFO: Pod "pod-62378096-8f8a-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 8.795625ms May 6 11:11:52.476: INFO: Pod "pod-62378096-8f8a-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013506065s May 6 11:11:54.480: INFO: Pod "pod-62378096-8f8a-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017832491s STEP: Saw pod success May 6 11:11:54.480: INFO: Pod "pod-62378096-8f8a-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:11:54.484: INFO: Trying to get logs from node hunter-worker pod pod-62378096-8f8a-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 11:11:54.503: INFO: Waiting for pod pod-62378096-8f8a-11ea-b5fe-0242ac110017 to disappear May 6 11:11:54.507: INFO: Pod pod-62378096-8f8a-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:11:54.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-npv4m" for this suite. 
May 6 11:12:00.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:12:00.545: INFO: namespace: e2e-tests-emptydir-npv4m, resource: bindings, ignored listing per whitelist May 6 11:12:00.609: INFO: namespace e2e-tests-emptydir-npv4m deletion completed in 6.098537683s • [SLOW TEST:10.264 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:12:00.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 6 11:12:05.359: INFO: Successfully updated pod "annotationupdate686225b1-8f8a-11ea-b5fe-0242ac110017" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:12:07.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hq4tp" for this suite. 
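The projected downwardAPI test above mounts the pod's own annotations into a file, updates the pod, and waits for the kubelet to refresh the volume. A sketch of the projected volume wiring involved, with assumed names rather than the generated "annotationupdate..." pod:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "annotations",
							// The kubelet rewrites this file when metadata.annotations changes,
							// which is the update the test waits to observe.
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out)) // embed under spec.volumes and mount it from the container
}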
May 6 11:12:29.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:12:29.438: INFO: namespace: e2e-tests-projected-hq4tp, resource: bindings, ignored listing per whitelist May 6 11:12:29.492: INFO: namespace e2e-tests-projected-hq4tp deletion completed in 22.113850452s • [SLOW TEST:28.882 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:12:29.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-5tb6p May 6 11:12:35.637: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-5tb6p STEP: checking the pod's current state and verifying that restartCount is present May 6 11:12:35.640: INFO: Initial restart count of pod liveness-exec is 0 May 6 11:13:30.006: INFO: Restart count of pod e2e-tests-container-probe-5tb6p/liveness-exec is now 1 (54.366225394s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:13:30.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-5tb6p" for this suite. 
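The container-probe test above runs a pod whose liveness probe execs "cat /tmp/health"; the container removes that file partway through, so the kubelet restarts it, which is the restartCount bump logged at 11:13:30. A rough sketch of such a probe and container, with assumed delays and an assumed shell command:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Set Exec via the promoted field so this compiles against both older
	// (Handler) and newer (ProbeHandler) layouts of k8s.io/api.
	probe := &corev1.Probe{InitialDelaySeconds: 15, FailureThreshold: 1}
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	c := corev1.Container{
		Name:  "liveness",
		Image: "docker.io/library/busybox:1.29",
		// Create the health file, keep it briefly, then remove it so the probe starts failing.
		Command:       []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
		LivenessProbe: probe,
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}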
May 6 11:13:36.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:13:36.080: INFO: namespace: e2e-tests-container-probe-5tb6p, resource: bindings, ignored listing per whitelist May 6 11:13:36.123: INFO: namespace e2e-tests-container-probe-5tb6p deletion completed in 6.075758895s • [SLOW TEST:66.631 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:13:36.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 11:13:36.375: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a15a129a-8f8a-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-kdgbs" to be "success or failure" May 6 11:13:36.413: INFO: Pod "downwardapi-volume-a15a129a-8f8a-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 37.449994ms May 6 11:13:38.417: INFO: Pod "downwardapi-volume-a15a129a-8f8a-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041760643s May 6 11:13:40.420: INFO: Pod "downwardapi-volume-a15a129a-8f8a-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044444173s STEP: Saw pod success May 6 11:13:40.420: INFO: Pod "downwardapi-volume-a15a129a-8f8a-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:13:40.422: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a15a129a-8f8a-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 11:13:40.447: INFO: Waiting for pod downwardapi-volume-a15a129a-8f8a-11ea-b5fe-0242ac110017 to disappear May 6 11:13:40.463: INFO: Pod downwardapi-volume-a15a129a-8f8a-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:13:40.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kdgbs" for this suite. 
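The "should set mode on item file" test above uses the same projected downward API mechanism with an explicit per-item file mode. A short sketch of one such item, with an assumed 0400 mode:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // assumed; octal 0400 serializes as decimal 256 in JSON
	item := corev1.DownwardAPIVolumeFile{
		Path:     "podname",
		Mode:     &mode, // becomes the mode of the file inside the mounted volume
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
	}
	out, _ := json.MarshalIndent(item, "", "  ")
	fmt.Println(string(out)) // one entry of a projected downwardAPI volume's items list
}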
May 6 11:13:46.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:13:46.540: INFO: namespace: e2e-tests-projected-kdgbs, resource: bindings, ignored listing per whitelist May 6 11:13:46.571: INFO: namespace e2e-tests-projected-kdgbs deletion completed in 6.105915061s • [SLOW TEST:10.448 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:13:46.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-z5h5h in namespace e2e-tests-proxy-x4qjd I0506 11:13:46.739806 7 runners.go:184] Created replication controller with name: proxy-service-z5h5h, namespace: e2e-tests-proxy-x4qjd, replica count: 1 I0506 11:13:47.790305 7 runners.go:184] proxy-service-z5h5h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 11:13:48.790541 7 runners.go:184] proxy-service-z5h5h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 11:13:49.790802 7 runners.go:184] proxy-service-z5h5h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 11:13:50.791066 7 runners.go:184] proxy-service-z5h5h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 11:13:51.791309 7 runners.go:184] proxy-service-z5h5h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 11:13:52.791530 7 runners.go:184] proxy-service-z5h5h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 11:13:53.792040 7 runners.go:184] proxy-service-z5h5h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 11:13:54.792220 7 runners.go:184] proxy-service-z5h5h Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 11:13:54.826: INFO: setup took 8.162024801s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 6 11:13:54.830: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-x4qjd/pods/proxy-service-z5h5h-wqr8n/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] 
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 6 11:14:08.050: INFO: Waiting up to 5m0s for pod "downward-api-b43e3123-8f8a-11ea-b5fe-0242ac110017" in namespace "e2e-tests-downward-api-dnjsf" to be "success or failure" May 6 11:14:08.076: INFO: Pod "downward-api-b43e3123-8f8a-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 25.400268ms May 6 11:14:10.080: INFO: Pod "downward-api-b43e3123-8f8a-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029565392s May 6 11:14:12.084: INFO: Pod "downward-api-b43e3123-8f8a-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033571566s STEP: Saw pod success May 6 11:14:12.084: INFO: Pod "downward-api-b43e3123-8f8a-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:14:12.087: INFO: Trying to get logs from node hunter-worker pod downward-api-b43e3123-8f8a-11ea-b5fe-0242ac110017 container dapi-container: STEP: delete the pod May 6 11:14:12.138: INFO: Waiting for pod downward-api-b43e3123-8f8a-11ea-b5fe-0242ac110017 to disappear May 6 11:14:12.141: INFO: Pod downward-api-b43e3123-8f8a-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:14:12.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-dnjsf" for this suite. May 6 11:14:18.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:14:18.184: INFO: namespace: e2e-tests-downward-api-dnjsf, resource: bindings, ignored listing per whitelist May 6 11:14:18.239: INFO: namespace e2e-tests-downward-api-dnjsf deletion completed in 6.092886222s • [SLOW TEST:10.295 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:14:18.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-ba60f5fb-8f8a-11ea-b5fe-0242ac110017 STEP: Creating secret with name s-test-opt-upd-ba60f66a-8f8a-11ea-b5fe-0242ac110017 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-ba60f5fb-8f8a-11ea-b5fe-0242ac110017 STEP: Updating secret s-test-opt-upd-ba60f66a-8f8a-11ea-b5fe-0242ac110017 STEP: 
Creating secret with name s-test-opt-create-ba60f692-8f8a-11ea-b5fe-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:15:47.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pqknz" for this suite. May 6 11:16:11.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:16:11.917: INFO: namespace: e2e-tests-projected-pqknz, resource: bindings, ignored listing per whitelist May 6 11:16:11.957: INFO: namespace e2e-tests-projected-pqknz deletion completed in 24.095881526s • [SLOW TEST:113.718 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:16:11.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition May 6 11:16:12.047: INFO: Waiting up to 5m0s for pod "var-expansion-fe24ed2b-8f8a-11ea-b5fe-0242ac110017" in namespace "e2e-tests-var-expansion-pj42l" to be "success or failure" May 6 11:16:12.075: INFO: Pod "var-expansion-fe24ed2b-8f8a-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 27.260783ms May 6 11:16:14.093: INFO: Pod "var-expansion-fe24ed2b-8f8a-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045931838s May 6 11:16:16.116: INFO: Pod "var-expansion-fe24ed2b-8f8a-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068324557s STEP: Saw pod success May 6 11:16:16.116: INFO: Pod "var-expansion-fe24ed2b-8f8a-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:16:16.118: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-fe24ed2b-8f8a-11ea-b5fe-0242ac110017 container dapi-container: STEP: delete the pod May 6 11:16:16.206: INFO: Waiting for pod var-expansion-fe24ed2b-8f8a-11ea-b5fe-0242ac110017 to disappear May 6 11:16:16.253: INFO: Pod var-expansion-fe24ed2b-8f8a-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:16:16.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-pj42l" for this suite. 
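The variable-expansion test above composes one environment variable from others using the $(VAR) syntax and checks the result inside the container. A small sketch of the env list, with assumed variable names and values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"sh", "-c", "echo $FOOBAR"},
		Env: []corev1.EnvVar{
			{Name: "FOO", Value: "foo-value"},
			{Name: "BAR", Value: "bar-value"},
			// $(FOO) and $(BAR) are expanded before the container starts,
			// so FOOBAR resolves to "foo-value;;bar-value".
			{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}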
May 6 11:16:22.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:16:22.314: INFO: namespace: e2e-tests-var-expansion-pj42l, resource: bindings, ignored listing per whitelist May 6 11:16:22.352: INFO: namespace e2e-tests-var-expansion-pj42l deletion completed in 6.094948352s • [SLOW TEST:10.395 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:16:22.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium May 6 11:16:22.573: INFO: Waiting up to 5m0s for pod "pod-0461ca92-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-emptydir-xr7n9" to be "success or failure" May 6 11:16:22.581: INFO: Pod "pod-0461ca92-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 7.764255ms May 6 11:16:24.727: INFO: Pod "pod-0461ca92-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1540264s May 6 11:16:26.731: INFO: Pod "pod-0461ca92-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.157913503s STEP: Saw pod success May 6 11:16:26.731: INFO: Pod "pod-0461ca92-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:16:26.734: INFO: Trying to get logs from node hunter-worker pod pod-0461ca92-8f8b-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 11:16:26.788: INFO: Waiting for pod pod-0461ca92-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:16:26.796: INFO: Pod pod-0461ca92-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:16:26.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-xr7n9" for this suite. 
May 6 11:16:32.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:16:32.830: INFO: namespace: e2e-tests-emptydir-xr7n9, resource: bindings, ignored listing per whitelist May 6 11:16:32.891: INFO: namespace e2e-tests-emptydir-xr7n9 deletion completed in 6.092026612s • [SLOW TEST:10.538 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:16:32.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 11:16:33.060: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0aa9a953-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-xq6t8" to be "success or failure" May 6 11:16:33.075: INFO: Pod "downwardapi-volume-0aa9a953-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 14.855443ms May 6 11:16:35.212: INFO: Pod "downwardapi-volume-0aa9a953-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151945582s May 6 11:16:37.217: INFO: Pod "downwardapi-volume-0aa9a953-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156244065s STEP: Saw pod success May 6 11:16:37.217: INFO: Pod "downwardapi-volume-0aa9a953-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:16:37.219: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0aa9a953-8f8b-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 11:16:37.244: INFO: Waiting for pod downwardapi-volume-0aa9a953-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:16:37.254: INFO: Pod downwardapi-volume-0aa9a953-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:16:37.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xq6t8" for this suite. 
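The projected downwardAPI test above exposes limits.memory through a resourceFieldRef on a container that sets no memory limit, so the mounted file falls back to the node's allocatable memory. A sketch of that item, with an assumed container name and divisor:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	item := corev1.DownwardAPIVolumeFile{
		Path: "memory_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container", // must name a container in the same pod
			Resource:      "limits.memory",
			// Assumed divisor: report the value in 1Mi units instead of bytes.
			Divisor: resource.MustParse("1Mi"),
		},
	}
	out, _ := json.MarshalIndent(item, "", "  ")
	fmt.Println(string(out)) // with no memory limit set, the file holds node allocatable memory
}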
May 6 11:16:43.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:16:43.415: INFO: namespace: e2e-tests-projected-xq6t8, resource: bindings, ignored listing per whitelist May 6 11:16:43.417: INFO: namespace e2e-tests-projected-xq6t8 deletion completed in 6.160271226s • [SLOW TEST:10.526 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:16:43.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all May 6 11:16:43.561: INFO: Waiting up to 5m0s for pod "client-containers-10e8988f-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-containers-8crcq" to be "success or failure" May 6 11:16:43.589: INFO: Pod "client-containers-10e8988f-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 27.073693ms May 6 11:16:45.593: INFO: Pod "client-containers-10e8988f-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031640125s May 6 11:16:47.597: INFO: Pod "client-containers-10e8988f-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035582315s STEP: Saw pod success May 6 11:16:47.597: INFO: Pod "client-containers-10e8988f-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:16:47.600: INFO: Trying to get logs from node hunter-worker2 pod client-containers-10e8988f-8f8b-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 11:16:47.619: INFO: Waiting for pod client-containers-10e8988f-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:16:47.641: INFO: Pod client-containers-10e8988f-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:16:47.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-8crcq" for this suite. 
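The Docker Containers test above overrides both the image's default entrypoint and its arguments; in the pod API that is simply Command plus Args on the container. A tiny sketch with an assumed image and assumed values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "docker.io/library/busybox:1.29",
		// Command replaces the image ENTRYPOINT, Args replaces its CMD.
		Command: []string{"/bin/echo"},
		Args:    []string{"override", "arguments"},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}

The later "(docker entrypoint)" case in this log is the same idea with only Command set.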
May 6 11:16:53.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:16:53.790: INFO: namespace: e2e-tests-containers-8crcq, resource: bindings, ignored listing per whitelist May 6 11:16:53.795: INFO: namespace e2e-tests-containers-8crcq deletion completed in 6.149672232s • [SLOW TEST:10.377 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:16:53.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 6 11:16:53.925: INFO: PodSpec: initContainers in spec.initContainers May 6 11:17:39.988: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-171d858e-8f8b-11ea-b5fe-0242ac110017", GenerateName:"", Namespace:"e2e-tests-init-container-kp2w9", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-kp2w9/pods/pod-init-171d858e-8f8b-11ea-b5fe-0242ac110017", UID:"17508c3c-8f8b-11ea-99e8-0242ac110002", ResourceVersion:"9032516", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724360614, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"925830831"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-c9gj5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0023c03c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-c9gj5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-c9gj5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-c9gj5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002562cb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000f43500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002562d40)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002562d60)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002562d68), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002562d6c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724360614, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724360614, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724360614, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724360614, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.117", StartTime:(*v1.Time)(0xc001adcf60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001adcfa0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0023265b0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://2eb494f4a480613e8aea411a9fe7730124c1cb16d597527e1ffef968563f50c7"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001adcfc0), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001adcf80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:17:39.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-kp2w9" for this suite. May 6 11:18:04.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:18:04.089: INFO: namespace: e2e-tests-init-container-kp2w9, resource: bindings, ignored listing per whitelist May 6 11:18:04.138: INFO: namespace e2e-tests-init-container-kp2w9 deletion completed in 24.085879057s • [SLOW TEST:70.343 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:18:04.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-412badef-8f8b-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 11:18:04.522: INFO: Waiting up to 5m0s for pod "pod-secrets-412ce651-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-secrets-ccx24" to be "success or failure" May 6 11:18:04.597: INFO: Pod "pod-secrets-412ce651-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 74.040239ms May 6 11:18:06.601: INFO: Pod "pod-secrets-412ce651-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07892626s May 6 11:18:08.605: INFO: Pod "pod-secrets-412ce651-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.08292867s STEP: Saw pod success May 6 11:18:08.605: INFO: Pod "pod-secrets-412ce651-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:18:08.609: INFO: Trying to get logs from node hunter-worker pod pod-secrets-412ce651-8f8b-11ea-b5fe-0242ac110017 container secret-volume-test: STEP: delete the pod May 6 11:18:08.633: INFO: Waiting for pod pod-secrets-412ce651-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:18:08.674: INFO: Pod pod-secrets-412ce651-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:18:08.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ccx24" for this suite. May 6 11:18:14.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:18:14.752: INFO: namespace: e2e-tests-secrets-ccx24, resource: bindings, ignored listing per whitelist May 6 11:18:14.791: INFO: namespace e2e-tests-secrets-ccx24 deletion completed in 6.113316988s • [SLOW TEST:10.653 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:18:14.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 11:18:14.902: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4760d281-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-downward-api-2qw9d" to be "success or failure" May 6 11:18:14.907: INFO: Pod "downwardapi-volume-4760d281-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.595925ms May 6 11:18:16.910: INFO: Pod "downwardapi-volume-4760d281-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007502222s May 6 11:18:18.914: INFO: Pod "downwardapi-volume-4760d281-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011832089s STEP: Saw pod success May 6 11:18:18.914: INFO: Pod "downwardapi-volume-4760d281-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:18:18.917: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-4760d281-8f8b-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 11:18:18.935: INFO: Waiting for pod downwardapi-volume-4760d281-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:18:18.939: INFO: Pod downwardapi-volume-4760d281-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:18:18.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2qw9d" for this suite. May 6 11:18:24.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:18:25.007: INFO: namespace: e2e-tests-downward-api-2qw9d, resource: bindings, ignored listing per whitelist May 6 11:18:25.056: INFO: namespace e2e-tests-downward-api-2qw9d deletion completed in 6.114132819s • [SLOW TEST:10.265 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:18:25.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 6 11:18:25.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-h9wgc' May 6 11:18:25.264: INFO: stderr: "" May 6 11:18:25.264: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 6 11:18:30.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-h9wgc -o json' May 6 11:18:30.420: INFO: stderr: "" May 6 11:18:30.420: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-06T11:18:25Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": 
\"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-h9wgc\",\n \"resourceVersion\": \"9032686\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-h9wgc/pods/e2e-test-nginx-pod\",\n \"uid\": \"4d8cad35-8f8b-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-dcxt9\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-dcxt9\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-dcxt9\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T11:18:25Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T11:18:27Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T11:18:27Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T11:18:25Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://88b008e548a5709a4fbfdbd71aa7c61ebf3ff95ceb4f86cf684a7665ccc54430\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-06T11:18:27Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.135\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-06T11:18:25Z\"\n }\n}\n" STEP: replace the image in the pod May 6 11:18:30.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-h9wgc' May 6 11:18:30.695: INFO: stderr: "" May 6 11:18:30.695: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 May 6 11:18:30.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-h9wgc' May 6 11:18:34.023: INFO: stderr: "" May 6 11:18:34.024: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:18:34.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-h9wgc" for this suite. May 6 11:18:40.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:18:40.236: INFO: namespace: e2e-tests-kubectl-h9wgc, resource: bindings, ignored listing per whitelist May 6 11:18:40.288: INFO: namespace e2e-tests-kubectl-h9wgc deletion completed in 6.169189124s • [SLOW TEST:15.231 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:18:40.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-569917a2-8f8b-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 11:18:40.503: INFO: Waiting up to 5m0s for pod "pod-secrets-56a2b362-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-secrets-2q2w4" to be "success or failure" May 6 11:18:40.520: INFO: Pod "pod-secrets-56a2b362-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.727435ms May 6 11:18:42.660: INFO: Pod "pod-secrets-56a2b362-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157459371s May 6 11:18:44.665: INFO: Pod "pod-secrets-56a2b362-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.161946971s STEP: Saw pod success May 6 11:18:44.665: INFO: Pod "pod-secrets-56a2b362-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:18:44.668: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-56a2b362-8f8b-11ea-b5fe-0242ac110017 container secret-volume-test: STEP: delete the pod May 6 11:18:44.760: INFO: Waiting for pod pod-secrets-56a2b362-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:18:44.795: INFO: Pod pod-secrets-56a2b362-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:18:44.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-2q2w4" for this suite. 
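The secrets test above creates a same-named secret in a second namespace and shows that the pod's volume only ever resolves the name within the pod's own namespace, since a secret volume is referenced purely by name. A sketch of such a volume, with assumed names and an assumed 0440 defaultMode like the earlier defaultMode case:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0440) // assumed; serialized as decimal 288
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				// Only a name: it is looked up in the pod's own namespace,
				// regardless of same-named secrets elsewhere.
				SecretName:  "secret-test",
				DefaultMode: &mode,
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}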
May 6 11:18:50.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:18:50.873: INFO: namespace: e2e-tests-secrets-2q2w4, resource: bindings, ignored listing per whitelist May 6 11:18:50.904: INFO: namespace e2e-tests-secrets-2q2w4 deletion completed in 6.101126112s STEP: Destroying namespace "e2e-tests-secret-namespace-dsxz7" for this suite. May 6 11:18:56.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:18:56.998: INFO: namespace: e2e-tests-secret-namespace-dsxz7, resource: bindings, ignored listing per whitelist May 6 11:18:57.011: INFO: namespace e2e-tests-secret-namespace-dsxz7 deletion completed in 6.106665873s • [SLOW TEST:16.723 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:18:57.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command May 6 11:18:57.144: INFO: Waiting up to 5m0s for pod "client-containers-608e50c7-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-containers-8fvpn" to be "success or failure" May 6 11:18:57.168: INFO: Pod "client-containers-608e50c7-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 23.577672ms May 6 11:18:59.172: INFO: Pod "client-containers-608e50c7-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027323743s May 6 11:19:01.176: INFO: Pod "client-containers-608e50c7-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031260276s STEP: Saw pod success May 6 11:19:01.176: INFO: Pod "client-containers-608e50c7-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:19:01.179: INFO: Trying to get logs from node hunter-worker pod client-containers-608e50c7-8f8b-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 11:19:01.213: INFO: Waiting for pod client-containers-608e50c7-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:19:01.232: INFO: Pod client-containers-608e50c7-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:19:01.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-8fvpn" for this suite. 
May 6 11:19:07.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:19:07.274: INFO: namespace: e2e-tests-containers-8fvpn, resource: bindings, ignored listing per whitelist May 6 11:19:07.323: INFO: namespace e2e-tests-containers-8fvpn deletion completed in 6.087909157s • [SLOW TEST:10.312 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:19:07.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-66b2c68c-8f8b-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 11:19:07.455: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-66b36f5d-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-gr4z6" to be "success or failure" May 6 11:19:07.476: INFO: Pod "pod-projected-secrets-66b36f5d-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 20.934054ms May 6 11:19:09.480: INFO: Pod "pod-projected-secrets-66b36f5d-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024854302s May 6 11:19:11.484: INFO: Pod "pod-projected-secrets-66b36f5d-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029198274s STEP: Saw pod success May 6 11:19:11.484: INFO: Pod "pod-projected-secrets-66b36f5d-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:19:11.488: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-66b36f5d-8f8b-11ea-b5fe-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 6 11:19:11.509: INFO: Waiting for pod pod-projected-secrets-66b36f5d-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:19:11.513: INFO: Pod pod-projected-secrets-66b36f5d-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:19:11.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gr4z6" for this suite. 
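------------------------------
The projected-secret test mounts a secret through a projected volume rather than a plain secret volume; a projected volume can combine secrets, configMaps and downward API data under one mount point. A rough sketch, with assumed image, names and mount path (the container name comes from the log):

// Illustrative projected volume with a single secret source.
// Not the test source; image, names and mount path are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "cat /etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume", // illustrative path
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------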
May 6 11:19:17.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:19:17.669: INFO: namespace: e2e-tests-projected-gr4z6, resource: bindings, ignored listing per whitelist May 6 11:19:17.674: INFO: namespace e2e-tests-projected-gr4z6 deletion completed in 6.157504079s • [SLOW TEST:10.350 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:19:17.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-6cdf380c-8f8b-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 11:19:17.821: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ce114a2-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-configmap-npqjl" to be "success or failure" May 6 11:19:17.849: INFO: Pod "pod-configmaps-6ce114a2-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 28.066329ms May 6 11:19:19.854: INFO: Pod "pod-configmaps-6ce114a2-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032576242s May 6 11:19:21.858: INFO: Pod "pod-configmaps-6ce114a2-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037030577s STEP: Saw pod success May 6 11:19:21.859: INFO: Pod "pod-configmaps-6ce114a2-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:19:21.861: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-6ce114a2-8f8b-11ea-b5fe-0242ac110017 container configmap-volume-test: STEP: delete the pod May 6 11:19:21.882: INFO: Waiting for pod pod-configmaps-6ce114a2-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:19:21.887: INFO: Pod pod-configmaps-6ce114a2-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:19:21.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-npqjl" for this suite. 
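------------------------------
The ConfigMap volume test follows the same create-pod-then-read-the-file pattern with a configMap source. A sketch with assumed names, image and mount path (container name from the log):

// Illustrative configMap mounted as a volume; not the conformance test code.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume", // illustrative path
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					},
				},
			}},
		},
	}
	for _, obj := range []interface{}{cm, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
------------------------------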
May 6 11:19:27.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:19:27.970: INFO: namespace: e2e-tests-configmap-npqjl, resource: bindings, ignored listing per whitelist May 6 11:19:27.978: INFO: namespace e2e-tests-configmap-npqjl deletion completed in 6.087791441s • [SLOW TEST:10.304 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:19:27.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-gwpx STEP: Creating a pod to test atomic-volume-subpath May 6 11:19:28.110: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gwpx" in namespace "e2e-tests-subpath-s9cm5" to be "success or failure" May 6 11:19:28.127: INFO: Pod "pod-subpath-test-configmap-gwpx": Phase="Pending", Reason="", readiness=false. Elapsed: 17.291286ms May 6 11:19:30.131: INFO: Pod "pod-subpath-test-configmap-gwpx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021290241s May 6 11:19:32.135: INFO: Pod "pod-subpath-test-configmap-gwpx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025508988s May 6 11:19:34.462: INFO: Pod "pod-subpath-test-configmap-gwpx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.351700999s May 6 11:19:36.466: INFO: Pod "pod-subpath-test-configmap-gwpx": Phase="Running", Reason="", readiness=false. Elapsed: 8.35607821s May 6 11:19:38.471: INFO: Pod "pod-subpath-test-configmap-gwpx": Phase="Running", Reason="", readiness=false. Elapsed: 10.360906832s May 6 11:19:40.478: INFO: Pod "pod-subpath-test-configmap-gwpx": Phase="Running", Reason="", readiness=false. Elapsed: 12.368485827s May 6 11:19:42.498: INFO: Pod "pod-subpath-test-configmap-gwpx": Phase="Running", Reason="", readiness=false. Elapsed: 14.388506321s May 6 11:19:44.503: INFO: Pod "pod-subpath-test-configmap-gwpx": Phase="Running", Reason="", readiness=false. Elapsed: 16.392713092s May 6 11:19:46.506: INFO: Pod "pod-subpath-test-configmap-gwpx": Phase="Running", Reason="", readiness=false. Elapsed: 18.396502083s May 6 11:19:48.511: INFO: Pod "pod-subpath-test-configmap-gwpx": Phase="Running", Reason="", readiness=false. Elapsed: 20.400717379s May 6 11:19:50.515: INFO: Pod "pod-subpath-test-configmap-gwpx": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.404689793s May 6 11:19:52.557: INFO: Pod "pod-subpath-test-configmap-gwpx": Phase="Running", Reason="", readiness=false. Elapsed: 24.44727392s May 6 11:19:54.561: INFO: Pod "pod-subpath-test-configmap-gwpx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.451508325s STEP: Saw pod success May 6 11:19:54.562: INFO: Pod "pod-subpath-test-configmap-gwpx" satisfied condition "success or failure" May 6 11:19:54.565: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-gwpx container test-container-subpath-configmap-gwpx: STEP: delete the pod May 6 11:19:54.633: INFO: Waiting for pod pod-subpath-test-configmap-gwpx to disappear May 6 11:19:54.730: INFO: Pod pod-subpath-test-configmap-gwpx no longer exists STEP: Deleting pod pod-subpath-test-configmap-gwpx May 6 11:19:54.730: INFO: Deleting pod "pod-subpath-test-configmap-gwpx" in namespace "e2e-tests-subpath-s9cm5" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:19:54.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-s9cm5" for this suite. May 6 11:20:00.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:20:00.763: INFO: namespace: e2e-tests-subpath-s9cm5, resource: bindings, ignored listing per whitelist May 6 11:20:00.827: INFO: namespace e2e-tests-subpath-s9cm5 deletion completed in 6.090783286s • [SLOW TEST:32.849 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:20:00.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-9shgh/configmap-test-8690bc6f-8f8b-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 11:20:00.944: INFO: Waiting up to 5m0s for pod "pod-configmaps-86938d11-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-configmap-9shgh" to be "success or failure" May 6 11:20:00.948: INFO: Pod "pod-configmaps-86938d11-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.50312ms May 6 11:20:02.952: INFO: Pod "pod-configmaps-86938d11-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007605075s May 6 11:20:04.959: INFO: Pod "pod-configmaps-86938d11-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014667646s STEP: Saw pod success May 6 11:20:04.959: INFO: Pod "pod-configmaps-86938d11-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:20:04.962: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-86938d11-8f8b-11ea-b5fe-0242ac110017 container env-test: STEP: delete the pod May 6 11:20:05.000: INFO: Waiting for pod pod-configmaps-86938d11-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:20:05.026: INFO: Pod pod-configmaps-86938d11-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:20:05.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-9shgh" for this suite. May 6 11:20:11.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:20:11.056: INFO: namespace: e2e-tests-configmap-9shgh, resource: bindings, ignored listing per whitelist May 6 11:20:11.120: INFO: namespace e2e-tests-configmap-9shgh deletion completed in 6.090420512s • [SLOW TEST:10.293 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:20:11.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 11:20:11.242: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8cb6bdf1-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-9h8rm" to be "success or failure" May 6 11:20:11.246: INFO: Pod "downwardapi-volume-8cb6bdf1-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.199767ms May 6 11:20:13.250: INFO: Pod "downwardapi-volume-8cb6bdf1-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007532115s May 6 11:20:15.254: INFO: Pod "downwardapi-volume-8cb6bdf1-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011419922s STEP: Saw pod success May 6 11:20:15.254: INFO: Pod "downwardapi-volume-8cb6bdf1-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:20:15.257: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8cb6bdf1-8f8b-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 11:20:15.276: INFO: Waiting for pod downwardapi-volume-8cb6bdf1-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:20:15.281: INFO: Pod downwardapi-volume-8cb6bdf1-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:20:15.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9h8rm" for this suite. May 6 11:20:21.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:20:21.315: INFO: namespace: e2e-tests-projected-9h8rm, resource: bindings, ignored listing per whitelist May 6 11:20:21.360: INFO: namespace e2e-tests-projected-9h8rm deletion completed in 6.07621915s • [SLOW TEST:10.240 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:20:21.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 6 11:20:26.111: INFO: Successfully updated pod "annotationupdate92d6d77b-8f8b-11ea-b5fe-0242ac110017" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:20:28.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qctjk" for this suite. 
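------------------------------
The downward API volume tests in this stretch of the run (DefaultMode on files, annotation updates) all hang a downwardAPI volume off the pod and point its files at pod metadata via fieldRef; the kubelet rewrites those files when the referenced metadata changes, which is what the annotation-update test waits for after it patches the pod. A sketch with assumed image, path and mode (container name from the log):

// Illustrative downward API volume exposing pod annotations as a file.
// Not the test source; image, path and mode are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example",
			Annotations: map[string]string{"builder": "bar"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo", // illustrative path
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: int32Ptr(0644), // the knob the DefaultMode test asserts on
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------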
May 6 11:20:50.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:20:50.265: INFO: namespace: e2e-tests-downward-api-qctjk, resource: bindings, ignored listing per whitelist May 6 11:20:50.270: INFO: namespace e2e-tests-downward-api-qctjk deletion completed in 22.087995156s • [SLOW TEST:28.910 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:20:50.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 6 11:20:55.491: INFO: Successfully updated pod "pod-update-a43d2d6b-8f8b-11ea-b5fe-0242ac110017" STEP: verifying the updated pod is in kubernetes May 6 11:20:55.502: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:20:55.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-hm94z" for this suite. 
May 6 11:21:19.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:21:19.985: INFO: namespace: e2e-tests-pods-hm94z, resource: bindings, ignored listing per whitelist May 6 11:21:19.990: INFO: namespace e2e-tests-pods-hm94z deletion completed in 24.485785753s • [SLOW TEST:29.720 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:21:19.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-b5c93ce8-8f8b-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 11:21:20.161: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b5cbbb2d-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-x54ww" to be "success or failure" May 6 11:21:20.194: INFO: Pod "pod-projected-configmaps-b5cbbb2d-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 32.739135ms May 6 11:21:22.246: INFO: Pod "pod-projected-configmaps-b5cbbb2d-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085400726s May 6 11:21:24.250: INFO: Pod "pod-projected-configmaps-b5cbbb2d-8f8b-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.089354601s May 6 11:21:26.254: INFO: Pod "pod-projected-configmaps-b5cbbb2d-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093007563s STEP: Saw pod success May 6 11:21:26.254: INFO: Pod "pod-projected-configmaps-b5cbbb2d-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:21:26.259: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-b5cbbb2d-8f8b-11ea-b5fe-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 6 11:21:26.272: INFO: Waiting for pod pod-projected-configmaps-b5cbbb2d-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:21:26.277: INFO: Pod pod-projected-configmaps-b5cbbb2d-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:21:26.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-x54ww" for this suite. 
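------------------------------
The "mappings and Item mode set" variant of the projected configMap test adds items with an explicit key-to-path mapping and a per-file mode to the projection. A sketch of that shape (names, path and mode are assumptions; container name from the log):

// Illustrative projected configMap with a key-to-path mapping and file mode.
// Not the test source; names, path and mode are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume", // illustrative path
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								Items: []corev1.KeyToPath{{
									Key:  "data-2",
									Path: "path/to/data-2", // the mapping under test
									Mode: int32Ptr(0400),   // per-file mode ("Item mode set")
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------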
May 6 11:21:32.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:21:32.497: INFO: namespace: e2e-tests-projected-x54ww, resource: bindings, ignored listing per whitelist May 6 11:21:32.513: INFO: namespace e2e-tests-projected-x54ww deletion completed in 6.233331515s • [SLOW TEST:12.523 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:21:32.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-bd894afb-8f8b-11ea-b5fe-0242ac110017 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-bd894afb-8f8b-11ea-b5fe-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:21:39.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kz4j9" for this suite. 
May 6 11:22:01.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:22:01.350: INFO: namespace: e2e-tests-projected-kz4j9, resource: bindings, ignored listing per whitelist May 6 11:22:01.393: INFO: namespace e2e-tests-projected-kz4j9 deletion completed in 22.102268994s • [SLOW TEST:28.880 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:22:01.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 11:22:01.631: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce84e7ff-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-downward-api-ggzqp" to be "success or failure" May 6 11:22:01.680: INFO: Pod "downwardapi-volume-ce84e7ff-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 48.614913ms May 6 11:22:03.685: INFO: Pod "downwardapi-volume-ce84e7ff-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053728059s May 6 11:22:05.689: INFO: Pod "downwardapi-volume-ce84e7ff-8f8b-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.05754865s May 6 11:22:07.694: INFO: Pod "downwardapi-volume-ce84e7ff-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.062567821s STEP: Saw pod success May 6 11:22:07.694: INFO: Pod "downwardapi-volume-ce84e7ff-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:22:07.697: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ce84e7ff-8f8b-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 11:22:07.723: INFO: Waiting for pod downwardapi-volume-ce84e7ff-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:22:07.780: INFO: Pod downwardapi-volume-ce84e7ff-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:22:07.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ggzqp" for this suite. 
May 6 11:22:15.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:22:15.963: INFO: namespace: e2e-tests-downward-api-ggzqp, resource: bindings, ignored listing per whitelist May 6 11:22:15.970: INFO: namespace e2e-tests-downward-api-ggzqp deletion completed in 8.185896423s • [SLOW TEST:14.576 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:22:15.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-d7201dbc-8f8b-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 11:22:16.152: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d723b1bb-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-wnvw6" to be "success or failure" May 6 11:22:16.154: INFO: Pod "pod-projected-configmaps-d723b1bb-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.683122ms May 6 11:22:18.170: INFO: Pod "pod-projected-configmaps-d723b1bb-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018300448s May 6 11:22:20.174: INFO: Pod "pod-projected-configmaps-d723b1bb-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02241113s May 6 11:22:22.178: INFO: Pod "pod-projected-configmaps-d723b1bb-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026133726s STEP: Saw pod success May 6 11:22:22.178: INFO: Pod "pod-projected-configmaps-d723b1bb-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:22:22.180: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-d723b1bb-8f8b-11ea-b5fe-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 6 11:22:22.203: INFO: Waiting for pod pod-projected-configmaps-d723b1bb-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:22:22.214: INFO: Pod pod-projected-configmaps-d723b1bb-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:22:22.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wnvw6" for this suite. 
May 6 11:22:28.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:22:28.253: INFO: namespace: e2e-tests-projected-wnvw6, resource: bindings, ignored listing per whitelist May 6 11:22:28.311: INFO: namespace e2e-tests-projected-wnvw6 deletion completed in 6.095661094s • [SLOW TEST:12.341 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:22:28.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 6 11:22:28.563: INFO: Waiting up to 5m0s for pod "pod-de90a3a7-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-emptydir-6vnhn" to be "success or failure" May 6 11:22:28.583: INFO: Pod "pod-de90a3a7-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 19.907506ms May 6 11:22:30.673: INFO: Pod "pod-de90a3a7-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109931645s May 6 11:22:32.709: INFO: Pod "pod-de90a3a7-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.14572271s STEP: Saw pod success May 6 11:22:32.709: INFO: Pod "pod-de90a3a7-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:22:32.711: INFO: Trying to get logs from node hunter-worker pod pod-de90a3a7-8f8b-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 11:22:32.770: INFO: Waiting for pod pod-de90a3a7-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:22:32.788: INFO: Pod pod-de90a3a7-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:22:32.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-6vnhn" for this suite. 
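------------------------------
The emptyDir test names encode (user, file mode, medium): "(root,0666,tmpfs)" means the container runs as root, writes a 0666-mode file, and the volume is memory-backed. Only the medium shows up in the pod spec itself; a sketch with assumed image, path and command (container name from the log):

// Illustrative memory-backed emptyDir ("tmpfs" in the test name).
// Not the test source; image, path and the shell command are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "touch /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume", // illustrative path
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						// tmpfs-backed; the "(root,0666,default)" variant later in
						// the run leaves Medium empty (node default medium).
						Medium: corev1.StorageMediumMemory,
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------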
May 6 11:22:38.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:22:38.837: INFO: namespace: e2e-tests-emptydir-6vnhn, resource: bindings, ignored listing per whitelist May 6 11:22:38.886: INFO: namespace e2e-tests-emptydir-6vnhn deletion completed in 6.094214533s • [SLOW TEST:10.574 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:22:38.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 11:22:43.146: INFO: Waiting up to 5m0s for pod "client-envvars-e73e1f5f-8f8b-11ea-b5fe-0242ac110017" in namespace "e2e-tests-pods-8vm2j" to be "success or failure" May 6 11:22:43.157: INFO: Pod "client-envvars-e73e1f5f-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 11.157591ms May 6 11:22:45.224: INFO: Pod "client-envvars-e73e1f5f-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077978396s May 6 11:22:47.228: INFO: Pod "client-envvars-e73e1f5f-8f8b-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081568002s May 6 11:22:49.231: INFO: Pod "client-envvars-e73e1f5f-8f8b-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.085496748s STEP: Saw pod success May 6 11:22:49.232: INFO: Pod "client-envvars-e73e1f5f-8f8b-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:22:49.234: INFO: Trying to get logs from node hunter-worker pod client-envvars-e73e1f5f-8f8b-11ea-b5fe-0242ac110017 container env3cont: STEP: delete the pod May 6 11:22:49.256: INFO: Waiting for pod client-envvars-e73e1f5f-8f8b-11ea-b5fe-0242ac110017 to disappear May 6 11:22:49.262: INFO: Pod client-envvars-e73e1f5f-8f8b-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:22:49.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-8vm2j" for this suite. 
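------------------------------
The "environment variables for services" test relies on the kubelet injecting {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables into pods created after a Service exists in the namespace; the client pod (container env3cont above) just dumps its environment so the test can check for them. A sketch of the Service side, with assumed names and ports:

// Illustrative Service for the service-environment-variable mechanism.
// Not the test source; name, selector and ports are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "fooservice"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "serverpod"},
			Ports: []corev1.ServicePort{{
				Port:       8765,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
	// A pod created afterwards in the same namespace sees, among others:
	//   FOOSERVICE_SERVICE_HOST=<cluster IP>
	//   FOOSERVICE_SERVICE_PORT=8765
	// which is what the client pod's container prints and the test checks.
}
------------------------------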
May 6 11:23:33.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:23:33.393: INFO: namespace: e2e-tests-pods-8vm2j, resource: bindings, ignored listing per whitelist May 6 11:23:33.398: INFO: namespace e2e-tests-pods-8vm2j deletion completed in 44.132750014s • [SLOW TEST:54.512 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:23:33.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 6 11:23:33.574: INFO: Waiting up to 5m0s for pod "pod-054f9329-8f8c-11ea-b5fe-0242ac110017" in namespace "e2e-tests-emptydir-b7d9g" to be "success or failure" May 6 11:23:33.597: INFO: Pod "pod-054f9329-8f8c-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 22.087026ms May 6 11:23:35.674: INFO: Pod "pod-054f9329-8f8c-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099895915s May 6 11:23:37.678: INFO: Pod "pod-054f9329-8f8c-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.103620712s May 6 11:23:39.682: INFO: Pod "pod-054f9329-8f8c-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10743987s STEP: Saw pod success May 6 11:23:39.682: INFO: Pod "pod-054f9329-8f8c-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:23:39.684: INFO: Trying to get logs from node hunter-worker pod pod-054f9329-8f8c-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 11:23:39.788: INFO: Waiting for pod pod-054f9329-8f8c-11ea-b5fe-0242ac110017 to disappear May 6 11:23:39.796: INFO: Pod pod-054f9329-8f8c-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:23:39.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-b7d9g" for this suite. 
May 6 11:23:45.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:23:45.888: INFO: namespace: e2e-tests-emptydir-b7d9g, resource: bindings, ignored listing per whitelist May 6 11:23:45.892: INFO: namespace e2e-tests-emptydir-b7d9g deletion completed in 6.091931006s • [SLOW TEST:12.494 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:23:45.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:23:54.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-wgjbp" for this suite. 
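------------------------------
The ReplicationController adoption test first creates a bare pod carrying a 'name' label, then a controller whose selector matches it; the controller adopts the orphan instead of creating a new replica. A sketch of the two objects (image assumed, label and names modeled on the log's pod-adoption steps):

// Illustrative orphan pod plus a ReplicationController whose selector
// matches it. Not the test source; the image is an assumption.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-adoption"}

	orphan := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pod-adoption", Image: "nginx"}}, // placeholder image
		},
	}

	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: labels, // matches the orphan, so it gets adopted
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}

	for _, obj := range []interface{}{orphan, rc} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
------------------------------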
May 6 11:24:18.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:24:18.313: INFO: namespace: e2e-tests-replication-controller-wgjbp, resource: bindings, ignored listing per whitelist May 6 11:24:18.364: INFO: namespace e2e-tests-replication-controller-wgjbp deletion completed in 24.113056797s • [SLOW TEST:32.472 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:24:18.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 6 11:24:18.524: INFO: Waiting up to 5m0s for pod "downward-api-201bdafd-8f8c-11ea-b5fe-0242ac110017" in namespace "e2e-tests-downward-api-tvtr9" to be "success or failure" May 6 11:24:18.540: INFO: Pod "downward-api-201bdafd-8f8c-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.015687ms May 6 11:24:20.544: INFO: Pod "downward-api-201bdafd-8f8c-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019988779s May 6 11:24:22.549: INFO: Pod "downward-api-201bdafd-8f8c-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.024862142s May 6 11:24:24.553: INFO: Pod "downward-api-201bdafd-8f8c-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029724267s STEP: Saw pod success May 6 11:24:24.553: INFO: Pod "downward-api-201bdafd-8f8c-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:24:24.556: INFO: Trying to get logs from node hunter-worker pod downward-api-201bdafd-8f8c-11ea-b5fe-0242ac110017 container dapi-container: STEP: delete the pod May 6 11:24:24.610: INFO: Waiting for pod downward-api-201bdafd-8f8c-11ea-b5fe-0242ac110017 to disappear May 6 11:24:24.618: INFO: Pod downward-api-201bdafd-8f8c-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:24:24.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tvtr9" for this suite. 
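------------------------------
The downward API env-var test exposes pod fields through environment variables rather than a volume; metadata.uid via fieldRef is the field under test here. A sketch with assumed image, command and variable name (container name from the log):

// Illustrative downward API env var carrying the pod UID.
// Not the test source; image, command and variable name are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", "env | grep POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------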
May 6 11:24:30.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:24:30.746: INFO: namespace: e2e-tests-downward-api-tvtr9, resource: bindings, ignored listing per whitelist May 6 11:24:30.765: INFO: namespace e2e-tests-downward-api-tvtr9 deletion completed in 6.14440356s • [SLOW TEST:12.400 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:24:30.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-dfg6w [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-dfg6w STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-dfg6w May 6 11:24:31.133: INFO: Found 0 stateful pods, waiting for 1 May 6 11:24:41.137: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 6 11:24:41.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 6 11:24:41.448: INFO: stderr: "I0506 11:24:41.257965 954 log.go:172] (0xc00074a210) (0xc000135400) Create stream\nI0506 11:24:41.258024 954 log.go:172] (0xc00074a210) (0xc000135400) Stream added, broadcasting: 1\nI0506 11:24:41.260037 954 log.go:172] (0xc00074a210) Reply frame received for 1\nI0506 11:24:41.260072 954 log.go:172] (0xc00074a210) (0xc0001354a0) Create stream\nI0506 11:24:41.260083 954 log.go:172] (0xc00074a210) (0xc0001354a0) Stream added, broadcasting: 3\nI0506 11:24:41.260788 954 log.go:172] (0xc00074a210) Reply frame received for 3\nI0506 11:24:41.260824 954 log.go:172] (0xc00074a210) (0xc000748000) Create stream\nI0506 11:24:41.260835 954 log.go:172] (0xc00074a210) (0xc000748000) Stream added, broadcasting: 5\nI0506 11:24:41.261787 954 log.go:172] (0xc00074a210) Reply frame received for 5\nI0506 11:24:41.442120 954 log.go:172] (0xc00074a210) Data frame received for 5\nI0506 11:24:41.442164 954 log.go:172] (0xc000748000) (5) Data frame handling\nI0506 
11:24:41.442210 954 log.go:172] (0xc00074a210) Data frame received for 3\nI0506 11:24:41.442267 954 log.go:172] (0xc0001354a0) (3) Data frame handling\nI0506 11:24:41.442296 954 log.go:172] (0xc0001354a0) (3) Data frame sent\nI0506 11:24:41.442315 954 log.go:172] (0xc00074a210) Data frame received for 3\nI0506 11:24:41.442328 954 log.go:172] (0xc0001354a0) (3) Data frame handling\nI0506 11:24:41.444470 954 log.go:172] (0xc00074a210) Data frame received for 1\nI0506 11:24:41.444491 954 log.go:172] (0xc000135400) (1) Data frame handling\nI0506 11:24:41.444507 954 log.go:172] (0xc000135400) (1) Data frame sent\nI0506 11:24:41.444691 954 log.go:172] (0xc00074a210) (0xc000135400) Stream removed, broadcasting: 1\nI0506 11:24:41.444798 954 log.go:172] (0xc00074a210) Go away received\nI0506 11:24:41.444853 954 log.go:172] (0xc00074a210) (0xc000135400) Stream removed, broadcasting: 1\nI0506 11:24:41.444882 954 log.go:172] (0xc00074a210) (0xc0001354a0) Stream removed, broadcasting: 3\nI0506 11:24:41.444897 954 log.go:172] (0xc00074a210) (0xc000748000) Stream removed, broadcasting: 5\n" May 6 11:24:41.448: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 6 11:24:41.448: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 6 11:24:41.452: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 6 11:24:51.457: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 11:24:51.457: INFO: Waiting for statefulset status.replicas updated to 0 May 6 11:24:51.487: INFO: POD NODE PHASE GRACE CONDITIONS May 6 11:24:51.487: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC }] May 6 11:24:51.487: INFO: May 6 11:24:51.487: INFO: StatefulSet ss has not reached scale 3, at 1 May 6 11:24:52.492: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.979638402s May 6 11:24:53.496: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.974399689s May 6 11:24:54.501: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.969944797s May 6 11:24:55.506: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.96530545s May 6 11:24:56.523: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.960191066s May 6 11:24:57.529: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.943386534s May 6 11:24:58.574: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.937332626s May 6 11:24:59.580: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.892063766s May 6 11:25:00.586: INFO: Verifying statefulset ss doesn't scale past 3 for another 886.246787ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-dfg6w May 6 11:25:01.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:25:02.104: INFO: stderr: "I0506 
11:25:02.029841 977 log.go:172] (0xc0008042c0) (0xc00071c640) Create stream\nI0506 11:25:02.029902 977 log.go:172] (0xc0008042c0) (0xc00071c640) Stream added, broadcasting: 1\nI0506 11:25:02.031959 977 log.go:172] (0xc0008042c0) Reply frame received for 1\nI0506 11:25:02.032004 977 log.go:172] (0xc0008042c0) (0xc00071c6e0) Create stream\nI0506 11:25:02.032023 977 log.go:172] (0xc0008042c0) (0xc00071c6e0) Stream added, broadcasting: 3\nI0506 11:25:02.032779 977 log.go:172] (0xc0008042c0) Reply frame received for 3\nI0506 11:25:02.032804 977 log.go:172] (0xc0008042c0) (0xc0007d2be0) Create stream\nI0506 11:25:02.032816 977 log.go:172] (0xc0008042c0) (0xc0007d2be0) Stream added, broadcasting: 5\nI0506 11:25:02.033966 977 log.go:172] (0xc0008042c0) Reply frame received for 5\nI0506 11:25:02.098187 977 log.go:172] (0xc0008042c0) Data frame received for 5\nI0506 11:25:02.098226 977 log.go:172] (0xc0007d2be0) (5) Data frame handling\nI0506 11:25:02.098248 977 log.go:172] (0xc0008042c0) Data frame received for 3\nI0506 11:25:02.098268 977 log.go:172] (0xc00071c6e0) (3) Data frame handling\nI0506 11:25:02.098287 977 log.go:172] (0xc00071c6e0) (3) Data frame sent\nI0506 11:25:02.098298 977 log.go:172] (0xc0008042c0) Data frame received for 3\nI0506 11:25:02.098321 977 log.go:172] (0xc00071c6e0) (3) Data frame handling\nI0506 11:25:02.099763 977 log.go:172] (0xc0008042c0) Data frame received for 1\nI0506 11:25:02.099779 977 log.go:172] (0xc00071c640) (1) Data frame handling\nI0506 11:25:02.099787 977 log.go:172] (0xc00071c640) (1) Data frame sent\nI0506 11:25:02.099800 977 log.go:172] (0xc0008042c0) (0xc00071c640) Stream removed, broadcasting: 1\nI0506 11:25:02.099841 977 log.go:172] (0xc0008042c0) Go away received\nI0506 11:25:02.099928 977 log.go:172] (0xc0008042c0) (0xc00071c640) Stream removed, broadcasting: 1\nI0506 11:25:02.099942 977 log.go:172] (0xc0008042c0) (0xc00071c6e0) Stream removed, broadcasting: 3\nI0506 11:25:02.099951 977 log.go:172] (0xc0008042c0) (0xc0007d2be0) Stream removed, broadcasting: 5\n" May 6 11:25:02.104: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 6 11:25:02.104: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 6 11:25:02.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:25:02.295: INFO: stderr: "I0506 11:25:02.228425 998 log.go:172] (0xc000138790) (0xc0007a7220) Create stream\nI0506 11:25:02.228490 998 log.go:172] (0xc000138790) (0xc0007a7220) Stream added, broadcasting: 1\nI0506 11:25:02.230921 998 log.go:172] (0xc000138790) Reply frame received for 1\nI0506 11:25:02.230968 998 log.go:172] (0xc000138790) (0xc000724000) Create stream\nI0506 11:25:02.230984 998 log.go:172] (0xc000138790) (0xc000724000) Stream added, broadcasting: 3\nI0506 11:25:02.231929 998 log.go:172] (0xc000138790) Reply frame received for 3\nI0506 11:25:02.231968 998 log.go:172] (0xc000138790) (0xc0003e2000) Create stream\nI0506 11:25:02.231982 998 log.go:172] (0xc000138790) (0xc0003e2000) Stream added, broadcasting: 5\nI0506 11:25:02.232777 998 log.go:172] (0xc000138790) Reply frame received for 5\nI0506 11:25:02.289064 998 log.go:172] (0xc000138790) Data frame received for 3\nI0506 11:25:02.289308 998 log.go:172] (0xc000724000) (3) Data frame handling\nI0506 11:25:02.289330 998 log.go:172] (0xc000724000) (3) Data frame 
sent\nI0506 11:25:02.289342 998 log.go:172] (0xc000138790) Data frame received for 3\nI0506 11:25:02.289352 998 log.go:172] (0xc000724000) (3) Data frame handling\nI0506 11:25:02.289393 998 log.go:172] (0xc000138790) Data frame received for 5\nI0506 11:25:02.289418 998 log.go:172] (0xc0003e2000) (5) Data frame handling\nI0506 11:25:02.289444 998 log.go:172] (0xc0003e2000) (5) Data frame sent\nI0506 11:25:02.289458 998 log.go:172] (0xc000138790) Data frame received for 5\nI0506 11:25:02.289477 998 log.go:172] (0xc0003e2000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0506 11:25:02.291206 998 log.go:172] (0xc000138790) Data frame received for 1\nI0506 11:25:02.291232 998 log.go:172] (0xc0007a7220) (1) Data frame handling\nI0506 11:25:02.291242 998 log.go:172] (0xc0007a7220) (1) Data frame sent\nI0506 11:25:02.291259 998 log.go:172] (0xc000138790) (0xc0007a7220) Stream removed, broadcasting: 1\nI0506 11:25:02.291272 998 log.go:172] (0xc000138790) Go away received\nI0506 11:25:02.291551 998 log.go:172] (0xc000138790) (0xc0007a7220) Stream removed, broadcasting: 1\nI0506 11:25:02.291578 998 log.go:172] (0xc000138790) (0xc000724000) Stream removed, broadcasting: 3\nI0506 11:25:02.291601 998 log.go:172] (0xc000138790) (0xc0003e2000) Stream removed, broadcasting: 5\n" May 6 11:25:02.295: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 6 11:25:02.295: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 6 11:25:02.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:25:02.947: INFO: stderr: "I0506 11:25:02.879479 1020 log.go:172] (0xc000202420) (0xc000121360) Create stream\nI0506 11:25:02.879536 1020 log.go:172] (0xc000202420) (0xc000121360) Stream added, broadcasting: 1\nI0506 11:25:02.882227 1020 log.go:172] (0xc000202420) Reply frame received for 1\nI0506 11:25:02.882292 1020 log.go:172] (0xc000202420) (0xc00072c000) Create stream\nI0506 11:25:02.882315 1020 log.go:172] (0xc000202420) (0xc00072c000) Stream added, broadcasting: 3\nI0506 11:25:02.883317 1020 log.go:172] (0xc000202420) Reply frame received for 3\nI0506 11:25:02.883379 1020 log.go:172] (0xc000202420) (0xc0003b8000) Create stream\nI0506 11:25:02.883403 1020 log.go:172] (0xc000202420) (0xc0003b8000) Stream added, broadcasting: 5\nI0506 11:25:02.884342 1020 log.go:172] (0xc000202420) Reply frame received for 5\nI0506 11:25:02.941085 1020 log.go:172] (0xc000202420) Data frame received for 5\nI0506 11:25:02.941254 1020 log.go:172] (0xc0003b8000) (5) Data frame handling\nI0506 11:25:02.941268 1020 log.go:172] (0xc0003b8000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0506 11:25:02.941282 1020 log.go:172] (0xc000202420) Data frame received for 3\nI0506 11:25:02.941288 1020 log.go:172] (0xc00072c000) (3) Data frame handling\nI0506 11:25:02.941295 1020 log.go:172] (0xc00072c000) (3) Data frame sent\nI0506 11:25:02.941494 1020 log.go:172] (0xc000202420) Data frame received for 5\nI0506 11:25:02.941521 1020 log.go:172] (0xc0003b8000) (5) Data frame handling\nI0506 11:25:02.941538 1020 log.go:172] (0xc000202420) Data frame received for 3\nI0506 11:25:02.941544 1020 log.go:172] (0xc00072c000) (3) Data frame handling\nI0506 11:25:02.943399 1020 log.go:172] (0xc000202420) Data frame received for 1\nI0506 
11:25:02.943422 1020 log.go:172] (0xc000121360) (1) Data frame handling\nI0506 11:25:02.943439 1020 log.go:172] (0xc000121360) (1) Data frame sent\nI0506 11:25:02.943449 1020 log.go:172] (0xc000202420) (0xc000121360) Stream removed, broadcasting: 1\nI0506 11:25:02.943459 1020 log.go:172] (0xc000202420) Go away received\nI0506 11:25:02.943663 1020 log.go:172] (0xc000202420) (0xc000121360) Stream removed, broadcasting: 1\nI0506 11:25:02.943678 1020 log.go:172] (0xc000202420) (0xc00072c000) Stream removed, broadcasting: 3\nI0506 11:25:02.943685 1020 log.go:172] (0xc000202420) (0xc0003b8000) Stream removed, broadcasting: 5\n" May 6 11:25:02.947: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 6 11:25:02.947: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 6 11:25:02.951: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 6 11:25:02.951: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 6 11:25:02.951: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 6 11:25:02.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 6 11:25:03.162: INFO: stderr: "I0506 11:25:03.107576 1043 log.go:172] (0xc0006b0420) (0xc000417220) Create stream\nI0506 11:25:03.107643 1043 log.go:172] (0xc0006b0420) (0xc000417220) Stream added, broadcasting: 1\nI0506 11:25:03.109820 1043 log.go:172] (0xc0006b0420) Reply frame received for 1\nI0506 11:25:03.109851 1043 log.go:172] (0xc0006b0420) (0xc000746000) Create stream\nI0506 11:25:03.109859 1043 log.go:172] (0xc0006b0420) (0xc000746000) Stream added, broadcasting: 3\nI0506 11:25:03.110567 1043 log.go:172] (0xc0006b0420) Reply frame received for 3\nI0506 11:25:03.110619 1043 log.go:172] (0xc0006b0420) (0xc0004172c0) Create stream\nI0506 11:25:03.110636 1043 log.go:172] (0xc0006b0420) (0xc0004172c0) Stream added, broadcasting: 5\nI0506 11:25:03.111833 1043 log.go:172] (0xc0006b0420) Reply frame received for 5\nI0506 11:25:03.156163 1043 log.go:172] (0xc0006b0420) Data frame received for 5\nI0506 11:25:03.156189 1043 log.go:172] (0xc0004172c0) (5) Data frame handling\nI0506 11:25:03.156206 1043 log.go:172] (0xc0006b0420) Data frame received for 3\nI0506 11:25:03.156211 1043 log.go:172] (0xc000746000) (3) Data frame handling\nI0506 11:25:03.156218 1043 log.go:172] (0xc000746000) (3) Data frame sent\nI0506 11:25:03.156223 1043 log.go:172] (0xc0006b0420) Data frame received for 3\nI0506 11:25:03.156226 1043 log.go:172] (0xc000746000) (3) Data frame handling\nI0506 11:25:03.157774 1043 log.go:172] (0xc0006b0420) Data frame received for 1\nI0506 11:25:03.157786 1043 log.go:172] (0xc000417220) (1) Data frame handling\nI0506 11:25:03.157792 1043 log.go:172] (0xc000417220) (1) Data frame sent\nI0506 11:25:03.157862 1043 log.go:172] (0xc0006b0420) (0xc000417220) Stream removed, broadcasting: 1\nI0506 11:25:03.157904 1043 log.go:172] (0xc0006b0420) Go away received\nI0506 11:25:03.158147 1043 log.go:172] (0xc0006b0420) (0xc000417220) Stream removed, broadcasting: 1\nI0506 11:25:03.158173 1043 log.go:172] (0xc0006b0420) (0xc000746000) Stream removed, broadcasting: 3\nI0506 11:25:03.158184 1043 log.go:172] (0xc0006b0420) (0xc0004172c0) Stream 
removed, broadcasting: 5\n" May 6 11:25:03.162: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 6 11:25:03.162: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 6 11:25:03.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 6 11:25:03.387: INFO: stderr: "I0506 11:25:03.292782 1065 log.go:172] (0xc000138840) (0xc000786640) Create stream\nI0506 11:25:03.292835 1065 log.go:172] (0xc000138840) (0xc000786640) Stream added, broadcasting: 1\nI0506 11:25:03.294638 1065 log.go:172] (0xc000138840) Reply frame received for 1\nI0506 11:25:03.294678 1065 log.go:172] (0xc000138840) (0xc0006a8c80) Create stream\nI0506 11:25:03.294691 1065 log.go:172] (0xc000138840) (0xc0006a8c80) Stream added, broadcasting: 3\nI0506 11:25:03.295309 1065 log.go:172] (0xc000138840) Reply frame received for 3\nI0506 11:25:03.295334 1065 log.go:172] (0xc000138840) (0xc00069c000) Create stream\nI0506 11:25:03.295341 1065 log.go:172] (0xc000138840) (0xc00069c000) Stream added, broadcasting: 5\nI0506 11:25:03.295877 1065 log.go:172] (0xc000138840) Reply frame received for 5\nI0506 11:25:03.381616 1065 log.go:172] (0xc000138840) Data frame received for 3\nI0506 11:25:03.381667 1065 log.go:172] (0xc0006a8c80) (3) Data frame handling\nI0506 11:25:03.381676 1065 log.go:172] (0xc0006a8c80) (3) Data frame sent\nI0506 11:25:03.381681 1065 log.go:172] (0xc000138840) Data frame received for 3\nI0506 11:25:03.381689 1065 log.go:172] (0xc0006a8c80) (3) Data frame handling\nI0506 11:25:03.381737 1065 log.go:172] (0xc000138840) Data frame received for 5\nI0506 11:25:03.381884 1065 log.go:172] (0xc00069c000) (5) Data frame handling\nI0506 11:25:03.383320 1065 log.go:172] (0xc000138840) Data frame received for 1\nI0506 11:25:03.383340 1065 log.go:172] (0xc000786640) (1) Data frame handling\nI0506 11:25:03.383349 1065 log.go:172] (0xc000786640) (1) Data frame sent\nI0506 11:25:03.383365 1065 log.go:172] (0xc000138840) (0xc000786640) Stream removed, broadcasting: 1\nI0506 11:25:03.383539 1065 log.go:172] (0xc000138840) (0xc000786640) Stream removed, broadcasting: 1\nI0506 11:25:03.383554 1065 log.go:172] (0xc000138840) (0xc0006a8c80) Stream removed, broadcasting: 3\nI0506 11:25:03.383563 1065 log.go:172] (0xc000138840) (0xc00069c000) Stream removed, broadcasting: 5\n" May 6 11:25:03.387: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 6 11:25:03.387: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 6 11:25:03.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 6 11:25:03.652: INFO: stderr: "I0506 11:25:03.558507 1089 log.go:172] (0xc00015c840) (0xc0005f52c0) Create stream\nI0506 11:25:03.558565 1089 log.go:172] (0xc00015c840) (0xc0005f52c0) Stream added, broadcasting: 1\nI0506 11:25:03.560578 1089 log.go:172] (0xc00015c840) Reply frame received for 1\nI0506 11:25:03.560621 1089 log.go:172] (0xc00015c840) (0xc0005f5360) Create stream\nI0506 11:25:03.560629 1089 log.go:172] (0xc00015c840) (0xc0005f5360) Stream added, broadcasting: 3\nI0506 11:25:03.561601 1089 log.go:172] (0xc00015c840) Reply frame received for 3\nI0506 
11:25:03.561647 1089 log.go:172] (0xc00015c840) (0xc000372000) Create stream\nI0506 11:25:03.561661 1089 log.go:172] (0xc00015c840) (0xc000372000) Stream added, broadcasting: 5\nI0506 11:25:03.562472 1089 log.go:172] (0xc00015c840) Reply frame received for 5\nI0506 11:25:03.645584 1089 log.go:172] (0xc00015c840) Data frame received for 3\nI0506 11:25:03.645624 1089 log.go:172] (0xc0005f5360) (3) Data frame handling\nI0506 11:25:03.645643 1089 log.go:172] (0xc0005f5360) (3) Data frame sent\nI0506 11:25:03.646000 1089 log.go:172] (0xc00015c840) Data frame received for 3\nI0506 11:25:03.646020 1089 log.go:172] (0xc0005f5360) (3) Data frame handling\nI0506 11:25:03.646127 1089 log.go:172] (0xc00015c840) Data frame received for 5\nI0506 11:25:03.646167 1089 log.go:172] (0xc000372000) (5) Data frame handling\nI0506 11:25:03.648004 1089 log.go:172] (0xc00015c840) Data frame received for 1\nI0506 11:25:03.648022 1089 log.go:172] (0xc0005f52c0) (1) Data frame handling\nI0506 11:25:03.648032 1089 log.go:172] (0xc0005f52c0) (1) Data frame sent\nI0506 11:25:03.648042 1089 log.go:172] (0xc00015c840) (0xc0005f52c0) Stream removed, broadcasting: 1\nI0506 11:25:03.648229 1089 log.go:172] (0xc00015c840) (0xc0005f52c0) Stream removed, broadcasting: 1\nI0506 11:25:03.648243 1089 log.go:172] (0xc00015c840) (0xc0005f5360) Stream removed, broadcasting: 3\nI0506 11:25:03.648250 1089 log.go:172] (0xc00015c840) (0xc000372000) Stream removed, broadcasting: 5\n" May 6 11:25:03.652: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 6 11:25:03.652: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 6 11:25:03.652: INFO: Waiting for statefulset status.replicas updated to 0 May 6 11:25:03.675: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 6 11:25:14.048: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 11:25:14.048: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 6 11:25:14.048: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 6 11:25:14.520: INFO: POD NODE PHASE GRACE CONDITIONS May 6 11:25:14.520: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC }] May 6 11:25:14.520: INFO: ss-1 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:14.521: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:14.521: INFO: May 6 11:25:14.521: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 11:25:15.977: INFO: POD NODE PHASE GRACE CONDITIONS May 6 11:25:15.977: INFO: ss-0 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC }] May 6 11:25:15.977: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:15.977: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:15.977: INFO: May 6 11:25:15.977: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 11:25:17.586: INFO: POD NODE PHASE GRACE CONDITIONS May 6 11:25:17.586: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC }] May 6 11:25:17.586: INFO: ss-1 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:17.587: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:17.587: INFO: May 6 11:25:17.587: INFO: StatefulSet ss 
has not reached scale 0, at 3 May 6 11:25:18.782: INFO: POD NODE PHASE GRACE CONDITIONS May 6 11:25:18.782: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC }] May 6 11:25:18.782: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:18.782: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:18.782: INFO: May 6 11:25:18.782: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 11:25:19.832: INFO: POD NODE PHASE GRACE CONDITIONS May 6 11:25:19.832: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC }] May 6 11:25:19.832: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:19.832: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:19.832: INFO: May 6 11:25:19.832: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 11:25:20.838: INFO: POD NODE PHASE GRACE CONDITIONS May 6 11:25:20.838: INFO: ss-0 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC }] May 6 11:25:20.838: INFO: ss-1 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:20.838: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:20.838: INFO: May 6 11:25:20.838: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 11:25:21.897: INFO: POD NODE PHASE GRACE CONDITIONS May 6 11:25:21.898: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC }] May 6 11:25:21.898: INFO: ss-1 hunter-worker2 Pending 0s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:21.898: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:21.898: INFO: May 6 11:25:21.898: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 11:25:22.903: INFO: POD NODE PHASE GRACE CONDITIONS May 6 11:25:22.903: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC }] May 6 11:25:22.903: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:22.903: INFO: May 6 11:25:22.903: INFO: StatefulSet ss has not reached scale 0, at 2 May 6 11:25:23.907: INFO: POD NODE PHASE GRACE CONDITIONS May 6 11:25:23.907: INFO: ss-0 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:31 +0000 UTC }] May 6 11:25:23.907: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:25:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:24:51 +0000 UTC }] May 6 11:25:23.907: INFO: May 6 11:25:23.907: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-dfg6w May 6 11:25:24.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:25:25.037: INFO: rc: 1 May 6 11:25:25.037: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0014f5e30 exit status 1 true [0xc0004b5c18 0xc0004b5c58 0xc0004b5c70] [0xc0004b5c18 0xc0004b5c58 0xc0004b5c70] [0xc0004b5c40 0xc0004b5c68] [0x935700 0x935700] 0xc0020207e0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 May 6 11:25:35.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:25:35.140: INFO: rc: 1 May 6 11:25:35.140: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000eec420 exit status 1 true [0xc0018580c0 0xc0018580d8 0xc0018580f0] [0xc0018580c0 0xc0018580d8 0xc0018580f0] [0xc0018580d0 0xc0018580e8] [0x935700 0x935700] 0xc001d6a720 }: Command stdout: stderr: 
Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:25:45.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:25:45.226: INFO: rc: 1 May 6 11:25:45.226: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019be120 exit status 1 true [0xc00016e000 0xc000554230 0xc000554748] [0xc00016e000 0xc000554230 0xc000554748] [0xc000554178 0xc000554700] [0x935700 0x935700] 0xc00203e2a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:25:55.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:25:55.311: INFO: rc: 1 May 6 11:25:55.311: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f12120 exit status 1 true [0xc000510118 0xc00114c190 0xc00114c320] [0xc000510118 0xc00114c190 0xc00114c320] [0xc00114c168 0xc00114c2c0] [0x935700 0x935700] 0xc0025c8720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:26:05.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:26:05.503: INFO: rc: 1 May 6 11:26:05.503: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e1e150 exit status 1 true [0xc001ec4000 0xc001ec4018 0xc001ec4030] [0xc001ec4000 0xc001ec4018 0xc001ec4030] [0xc001ec4010 0xc001ec4028] [0x935700 0x935700] 0xc0024c61e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:26:15.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:26:15.596: INFO: rc: 1 May 6 11:26:15.596: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019be270 exit status 1 true [0xc000554d20 0xc000554e50 0xc000555008] [0xc000554d20 0xc000554e50 0xc000555008] [0xc000554df8 0xc000554fd8] [0x935700 0x935700] 0xc00203e540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:26:25.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:26:25.685: INFO: rc: 1 May 6 11:26:25.685: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013d8120 exit status 1 true [0xc002386000 0xc002386018 0xc002386030] [0xc002386000 0xc002386018 0xc002386030] [0xc002386010 0xc002386028] [0x935700 0x935700] 0xc00177cea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:26:35.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:26:35.778: INFO: rc: 1 May 6 11:26:35.778: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013d8240 exit status 1 true [0xc002386038 0xc002386050 0xc002386068] [0xc002386038 0xc002386050 0xc002386068] [0xc002386048 0xc002386060] [0x935700 0x935700] 0xc00177d140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:26:45.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:26:45.853: INFO: rc: 1 May 6 11:26:45.853: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013d8360 exit status 1 true [0xc002386070 0xc002386088 0xc0023860a0] [0xc002386070 0xc002386088 0xc0023860a0] [0xc002386080 0xc002386098] [0x935700 0x935700] 0xc00177d3e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:26:55.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:26:55.940: INFO: rc: 1 May 6 11:26:55.940: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013d8480 exit status 1 true [0xc0023860a8 0xc0023860c0 0xc0023860d8] [0xc0023860a8 0xc0023860c0 0xc0023860d8] [0xc0023860b8 0xc0023860d0] [0x935700 0x935700] 0xc00177de60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:27:05.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:27:06.027: INFO: rc: 1 May 6 11:27:06.027: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013d85a0 exit status 1 true [0xc0023860e0 0xc0023860f8 0xc002386110] [0xc0023860e0 0xc0023860f8 0xc002386110] [0xc0023860f0 0xc002386108] [0x935700 0x935700] 0xc00213e120 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:27:16.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:27:16.171: INFO: rc: 1 May 6 11:27:16.171: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013d86c0 exit status 1 true [0xc002386118 0xc002386130 0xc002386148] [0xc002386118 0xc002386130 0xc002386148] [0xc002386128 0xc002386140] [0x935700 0x935700] 0xc00213e3c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:27:26.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:27:26.262: INFO: rc: 1 May 6 11:27:26.262: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019be420 exit status 1 true [0xc000555020 0xc000555090 0xc0005551d0] [0xc000555020 0xc000555090 0xc0005551d0] [0xc000555080 0xc000555158] [0x935700 0x935700] 0xc00203e7e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:27:36.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:27:36.372: INFO: rc: 1 May 6 11:27:36.372: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a40f0 exit status 1 true [0xc00186c020 0xc00186c038 0xc00186c050] [0xc00186c020 0xc00186c038 0xc00186c050] [0xc00186c030 0xc00186c048] [0x935700 0x935700] 0xc001eb61e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:27:46.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:27:46.504: INFO: rc: 1 May 6 11:27:46.504: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f12150 exit status 1 true [0xc000510118 0xc00114c168 0xc00114c2c0] [0xc000510118 0xc00114c168 0xc00114c2c0] [0xc00114c018 0xc00114c248] [0x935700 0x935700] 0xc00177cea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:27:56.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:27:56.601: INFO: rc: 1 May 6 11:27:56.601: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013d8150 exit status 1 true [0xc00186c068 0xc00186c0a8 0xc00186c0c0] [0xc00186c068 0xc00186c0a8 0xc00186c0c0] [0xc00186c090 0xc00186c0b8] [0x935700 0x935700] 0xc0025c8720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:28:06.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:28:06.693: INFO: rc: 1 May 6 11:28:06.694: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a42a0 exit status 1 true [0xc002386000 0xc002386018 0xc002386030] [0xc002386000 0xc002386018 0xc002386030] [0xc002386010 0xc002386028] [0x935700 0x935700] 0xc001eb6480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:28:16.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:28:16.788: INFO: rc: 1 May 6 11:28:16.788: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a43c0 exit status 1 true [0xc002386038 0xc002386050 0xc002386068] [0xc002386038 0xc002386050 0xc002386068] [0xc002386048 0xc002386060] [0x935700 0x935700] 0xc001eb6720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:28:26.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:28:26.876: INFO: rc: 1 May 6 11:28:26.877: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e1e120 exit status 1 true [0xc001ec4000 0xc001ec4018 0xc001ec4030] [0xc001ec4000 0xc001ec4018 0xc001ec4030] [0xc001ec4010 0xc001ec4028] 
[0x935700 0x935700] 0xc00213e1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:28:36.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:28:36.975: INFO: rc: 1 May 6 11:28:36.975: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001e1e270 exit status 1 true [0xc001ec4038 0xc001ec4050 0xc001ec4068] [0xc001ec4038 0xc001ec4050 0xc001ec4068] [0xc001ec4048 0xc001ec4060] [0x935700 0x935700] 0xc00213e480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:28:46.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:28:47.070: INFO: rc: 1 May 6 11:28:47.070: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f12300 exit status 1 true [0xc00114c320 0xc00114c460 0xc00114c5f0] [0xc00114c320 0xc00114c460 0xc00114c5f0] [0xc00114c458 0xc00114c5a0] [0x935700 0x935700] 0xc00177d140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:28:57.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:28:57.173: INFO: rc: 1 May 6 11:28:57.173: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f12450 exit status 1 true [0xc00114c660 0xc00114c6f8 0xc00114c7a8] [0xc00114c660 0xc00114c6f8 0xc00114c7a8] [0xc00114c6f0 0xc00114c728] [0x935700 0x935700] 0xc00177d3e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:29:07.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:29:07.274: INFO: rc: 1 May 6 11:29:07.274: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f125a0 exit status 1 true [0xc00114c7b0 0xc00114c810 0xc00114c8c0] [0xc00114c7b0 0xc00114c810 0xc00114c8c0] [0xc00114c7d8 0xc00114c8a0] [0x935700 0x935700] 0xc00177de60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:29:17.275: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:29:17.414: INFO: rc: 1 May 6 11:29:17.414: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013d8300 exit status 1 true [0xc00186c0c8 0xc00186c0e0 0xc00186c0f8] [0xc00186c0c8 0xc00186c0e0 0xc00186c0f8] [0xc00186c0d8 0xc00186c0f0] [0x935700 0x935700] 0xc0025c89c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:29:27.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:29:27.511: INFO: rc: 1 May 6 11:29:27.511: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013d84b0 exit status 1 true [0xc00186c100 0xc00186c118 0xc00186c130] [0xc00186c100 0xc00186c118 0xc00186c130] [0xc00186c110 0xc00186c128] [0x935700 0x935700] 0xc0025c8c60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:29:37.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:29:37.600: INFO: rc: 1 May 6 11:29:37.600: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f12120 exit status 1 true [0xc000510118 0xc00114c190 0xc00114c320] [0xc000510118 0xc00114c190 0xc00114c320] [0xc00114c168 0xc00114c2c0] [0x935700 0x935700] 0xc00213e1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:29:47.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:29:47.701: INFO: rc: 1 May 6 11:29:47.701: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013d8120 exit status 1 true [0xc001ec4000 0xc001ec4018 0xc001ec4030] [0xc001ec4000 0xc001ec4018 0xc001ec4030] [0xc001ec4010 0xc001ec4028] [0x935700 0x935700] 0xc00177cea0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:29:57.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:29:57.790: INFO: rc: 1 May 6 11:29:57.790: 
INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a4150 exit status 1 true [0xc00186c000 0xc00186c030 0xc00186c048] [0xc00186c000 0xc00186c030 0xc00186c048] [0xc00186c028 0xc00186c040] [0x935700 0x935700] 0xc0025c8720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:30:07.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:30:07.879: INFO: rc: 1 May 6 11:30:07.879: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0013d82a0 exit status 1 true [0xc001ec4038 0xc001ec4050 0xc001ec4068] [0xc001ec4038 0xc001ec4050 0xc001ec4068] [0xc001ec4048 0xc001ec4060] [0x935700 0x935700] 0xc00177d140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:30:17.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:30:17.960: INFO: rc: 1 May 6 11:30:17.960: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0020a4300 exit status 1 true [0xc00186c050 0xc00186c090 0xc00186c0b8] [0xc00186c050 0xc00186c090 0xc00186c0b8] [0xc00186c088 0xc00186c0b0] [0x935700 0x935700] 0xc0025c89c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 11:30:27.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dfg6w ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:30:28.124: INFO: rc: 1 May 6 11:30:28.124: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: May 6 11:30:28.124: INFO: Scaling statefulset ss to 0 May 6 11:30:28.134: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 6 11:30:28.137: INFO: Deleting all statefulset in ns e2e-tests-statefulset-dfg6w May 6 11:30:28.139: INFO: Scaling statefulset ss to 0 May 6 11:30:28.148: INFO: Waiting for statefulset status.replicas updated to 0 May 6 11:30:28.150: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:30:28.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-dfg6w" for this suite. 
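The scale exercise above reduces to one trick that the log makes visible: each pod flips to Ready=false as soon as index.html is moved out of the nginx web root and back to Ready=true when it is restored, so "burst scaling with unhealthy pods" is just a scale issued while replicas are deliberately unready. A rough reproduction with plain kubectl (the test itself drives the scale through the e2e framework; the StatefulSet name ss and namespace e2e-tests-statefulset-dfg6w are taken from the log, Parallel pod management and the final watch are assumptions for illustration):

# Break readiness on ss-0: the readiness check depends on index.html being present, so moving it away makes Ready=false.
kubectl -n e2e-tests-statefulset-dfg6w exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# Burst scale-up: with podManagementPolicy: Parallel the controller does not wait for ss-0 to become Ready.
kubectl -n e2e-tests-statefulset-dfg6w scale statefulset ss --replicas=3
# Restore readiness, break it again on every replica, then burst scale down the same way.
kubectl -n e2e-tests-statefulset-dfg6w exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
kubectl -n e2e-tests-statefulset-dfg6w scale statefulset ss --replicas=0
kubectl -n e2e-tests-statefulset-dfg6w get pods -w

The long retry tail in the log (rc: 1, "container not found" and then pods "ss-0" not found) is simply the same exec being reissued every 10s while the scale-down deletes the pods out from under it.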
May 6 11:30:34.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:30:34.228: INFO: namespace: e2e-tests-statefulset-dfg6w, resource: bindings, ignored listing per whitelist May 6 11:30:34.259: INFO: namespace e2e-tests-statefulset-dfg6w deletion completed in 6.084416833s • [SLOW TEST:363.493 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:30:34.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-00297f22-8f8d-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 11:30:34.503: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-00367a43-8f8d-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-h22kk" to be "success or failure" May 6 11:30:34.506: INFO: Pod "pod-projected-secrets-00367a43-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.670124ms May 6 11:30:36.518: INFO: Pod "pod-projected-secrets-00367a43-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0150634s May 6 11:30:38.522: INFO: Pod "pod-projected-secrets-00367a43-8f8d-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.018827255s May 6 11:30:40.526: INFO: Pod "pod-projected-secrets-00367a43-8f8d-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.023395421s STEP: Saw pod success May 6 11:30:40.526: INFO: Pod "pod-projected-secrets-00367a43-8f8d-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:30:40.529: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-00367a43-8f8d-11ea-b5fe-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 6 11:30:40.566: INFO: Waiting for pod pod-projected-secrets-00367a43-8f8d-11ea-b5fe-0242ac110017 to disappear May 6 11:30:40.578: INFO: Pod pod-projected-secrets-00367a43-8f8d-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:30:40.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h22kk" for this suite. May 6 11:30:46.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:30:46.679: INFO: namespace: e2e-tests-projected-h22kk, resource: bindings, ignored listing per whitelist May 6 11:30:46.681: INFO: namespace e2e-tests-projected-h22kk deletion completed in 6.09857872s • [SLOW TEST:12.422 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:30:46.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-078c57cd-8f8d-11ea-b5fe-0242ac110017 STEP: Creating configMap with name cm-test-opt-upd-078c5826-8f8d-11ea-b5fe-0242ac110017 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-078c57cd-8f8d-11ea-b5fe-0242ac110017 STEP: Updating configmap cm-test-opt-upd-078c5826-8f8d-11ea-b5fe-0242ac110017 STEP: Creating configMap with name cm-test-opt-create-078c584b-8f8d-11ea-b5fe-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:30:56.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qfdcf" for this suite. 
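What the optional-updates test above exercises can be approximated with plain kubectl: mount several configMaps as optional sources of one projected volume, then delete one, patch another and create a previously missing one while the pod keeps running; the kubelet's periodic sync rewrites the projected volume so the pod observes the changes without a restart. A hedged sketch with placeholder names (cm-del, cm-upd, cm-create, the pod name and mount path are illustrative, not the generated names in the log):

kubectl create configmap cm-del --from-literal=data-1=value-1
kubectl create configmap cm-upd --from-literal=data-1=value-1
# ...start a pod whose projected volume lists cm-del, cm-upd and cm-create as sources, all with optional: true...
kubectl delete configmap cm-del
kubectl patch configmap cm-upd --type merge -p '{"data":{"data-3":"value-3"}}'
kubectl create configmap cm-create --from-literal=data-1=value-1
# After the kubelet resync the projected volume gains the new and updated keys and drops the deleted one.
kubectl exec <pod-name> -- cat <mount-path>/data-3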
May 6 11:31:23.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:31:23.009: INFO: namespace: e2e-tests-projected-qfdcf, resource: bindings, ignored listing per whitelist May 6 11:31:23.080: INFO: namespace e2e-tests-projected-qfdcf deletion completed in 26.11591562s • [SLOW TEST:36.398 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:31:23.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 6 11:31:23.539: INFO: Waiting up to 5m0s for pod "pod-1d71317d-8f8d-11ea-b5fe-0242ac110017" in namespace "e2e-tests-emptydir-nvd54" to be "success or failure" May 6 11:31:23.586: INFO: Pod "pod-1d71317d-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 47.031822ms May 6 11:31:25.595: INFO: Pod "pod-1d71317d-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055611906s May 6 11:31:27.599: INFO: Pod "pod-1d71317d-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059809745s May 6 11:31:29.603: INFO: Pod "pod-1d71317d-8f8d-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063991237s STEP: Saw pod success May 6 11:31:29.603: INFO: Pod "pod-1d71317d-8f8d-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:31:29.607: INFO: Trying to get logs from node hunter-worker2 pod pod-1d71317d-8f8d-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 11:31:29.632: INFO: Waiting for pod pod-1d71317d-8f8d-11ea-b5fe-0242ac110017 to disappear May 6 11:31:29.671: INFO: Pod pod-1d71317d-8f8d-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:31:29.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-nvd54" for this suite. 
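[Editor's note] The EmptyDir "(root,0644,tmpfs)" case above exercises an emptyDir volume backed by memory (tmpfs) with a 0644 file inside it. A minimal sketch follows, assuming a busybox image in place of the suite's mounttest image and an illustrative pod name.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// emptyDir backed by tmpfs (medium "Memory"); the container writes a file,
	// sets 0644 permissions, and prints the resulting mode, roughly what the
	// conformance test's mount-test container verifies.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-0644-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative; the e2e test uses its own mounttest image
				Command: []string{"sh", "-c",
					"echo mount-tester new file > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a %n' /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```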
May 6 11:31:35.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:31:35.742: INFO: namespace: e2e-tests-emptydir-nvd54, resource: bindings, ignored listing per whitelist May 6 11:31:35.769: INFO: namespace e2e-tests-emptydir-nvd54 deletion completed in 6.093038212s • [SLOW TEST:12.687 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:31:35.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 11:31:35.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24d2ccdd-8f8d-11ea-b5fe-0242ac110017" in namespace "e2e-tests-downward-api-97q7b" to be "success or failure" May 6 11:31:35.964: INFO: Pod "downwardapi-volume-24d2ccdd-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 40.680919ms May 6 11:31:37.970: INFO: Pod "downwardapi-volume-24d2ccdd-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046491729s May 6 11:31:39.974: INFO: Pod "downwardapi-volume-24d2ccdd-8f8d-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050322415s STEP: Saw pod success May 6 11:31:39.974: INFO: Pod "downwardapi-volume-24d2ccdd-8f8d-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:31:39.977: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-24d2ccdd-8f8d-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 11:31:40.146: INFO: Waiting for pod downwardapi-volume-24d2ccdd-8f8d-11ea-b5fe-0242ac110017 to disappear May 6 11:31:40.217: INFO: Pod downwardapi-volume-24d2ccdd-8f8d-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:31:40.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-97q7b" for this suite. 
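[Editor's note] The Downward API volume case above ("should provide container's cpu limit") projects the container's own CPU limit into a file via a resourceFieldRef. A minimal sketch of that pod shape, with illustrative names and image:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Downward API volume exposing the container's own cpu limit as a file;
	// with a divisor of "1" and a limit of 1 CPU, /etc/podinfo/cpu_limit reads "1".
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-limit-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("1"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1"),
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```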
May 6 11:31:46.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:31:46.298: INFO: namespace: e2e-tests-downward-api-97q7b, resource: bindings, ignored listing per whitelist May 6 11:31:46.361: INFO: namespace e2e-tests-downward-api-97q7b deletion completed in 6.1398945s • [SLOW TEST:10.592 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:31:46.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:31:50.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-mlmt9" for this suite. 
May 6 11:31:57.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:31:57.434: INFO: namespace: e2e-tests-emptydir-wrapper-mlmt9, resource: bindings, ignored listing per whitelist May 6 11:31:57.477: INFO: namespace e2e-tests-emptydir-wrapper-mlmt9 deletion completed in 6.321394389s • [SLOW TEST:11.116 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:31:57.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 6 11:31:57.595: INFO: Waiting up to 5m0s for pod "downward-api-31bd7e24-8f8d-11ea-b5fe-0242ac110017" in namespace "e2e-tests-downward-api-bf9j2" to be "success or failure" May 6 11:31:57.607: INFO: Pod "downward-api-31bd7e24-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 12.410474ms May 6 11:31:59.616: INFO: Pod "downward-api-31bd7e24-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020738072s May 6 11:32:01.620: INFO: Pod "downward-api-31bd7e24-8f8d-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025053419s STEP: Saw pod success May 6 11:32:01.620: INFO: Pod "downward-api-31bd7e24-8f8d-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:32:01.625: INFO: Trying to get logs from node hunter-worker pod downward-api-31bd7e24-8f8d-11ea-b5fe-0242ac110017 container dapi-container: STEP: delete the pod May 6 11:32:01.677: INFO: Waiting for pod downward-api-31bd7e24-8f8d-11ea-b5fe-0242ac110017 to disappear May 6 11:32:01.685: INFO: Pod downward-api-31bd7e24-8f8d-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:32:01.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bf9j2" for this suite. 
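[Editor's note] The "[sig-node] Downward API should provide default limits.cpu/memory from node allocatable" case above depends on the fallback behaviour of resourceFieldRef environment variables: when the container declares no limits, the kubelet substitutes the node's allocatable CPU and memory. A sketch of such a pod, with illustrative names and image:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// No resource limits are set on the container, so the downward-API env
	// vars below fall back to the node's allocatable cpu/memory, which is the
	// behaviour the conformance test asserts on.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-node-defaults-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"},
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								Resource: "limits.cpu",
								Divisor:  resource.MustParse("1"),
							},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								Resource: "limits.memory",
								Divisor:  resource.MustParse("1Mi"),
							},
						},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```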
May 6 11:32:07.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:32:07.713: INFO: namespace: e2e-tests-downward-api-bf9j2, resource: bindings, ignored listing per whitelist May 6 11:32:07.810: INFO: namespace e2e-tests-downward-api-bf9j2 deletion completed in 6.121093242s • [SLOW TEST:10.333 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:32:07.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 11:32:08.007: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"37ea26e7-8f8d-11ea-99e8-0242ac110002", Controller:(*bool)(0xc002037962), BlockOwnerDeletion:(*bool)(0xc002037963)}} May 6 11:32:08.077: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"37e9073d-8f8d-11ea-99e8-0242ac110002", Controller:(*bool)(0xc002037bf2), BlockOwnerDeletion:(*bool)(0xc002037bf3)}} May 6 11:32:08.102: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"37e99ae7-8f8d-11ea-99e8-0242ac110002", Controller:(*bool)(0xc00211b40a), BlockOwnerDeletion:(*bool)(0xc00211b40b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:32:13.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-tk7rx" for this suite. 
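[Editor's note] The garbage-collector "dependency circle" case above builds exactly the ownerReference cycle printed in its log: pod1 owned by pod3, pod2 owned by pod1, pod3 owned by pod2. The sketch below only shows the shape of those ownerReferences; the UIDs are placeholders (in the real test they come back from the API server at create time), and the Controller/BlockOwnerDeletion values are set to true purely for illustration.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func boolPtr(b bool) *bool { return &b }

// ownedPod builds a pod whose single ownerReference points at another pod,
// mirroring the pod1->pod3, pod2->pod1, pod3->pod2 circle in the log above.
func ownedPod(name, ownerName string, ownerUID types.UID) corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion:         "v1",
				Kind:               "Pod",
				Name:               ownerName,
				UID:                ownerUID,
				Controller:         boolPtr(true),
				BlockOwnerDeletion: boolPtr(true),
			}},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "c", Image: "busybox"}}, // illustrative
		},
	}
}

func main() {
	pod1 := ownedPod("pod1", "pod3", "uid-of-pod3") // placeholder UIDs
	pod2 := ownedPod("pod2", "pod1", "uid-of-pod1")
	pod3 := ownedPod("pod3", "pod2", "uid-of-pod2")
	for _, p := range []corev1.Pod{pod1, pod2, pod3} {
		fmt.Printf("%s owned by %s\n", p.Name, p.OwnerReferences[0].Name)
	}
}
```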
May 6 11:32:19.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:32:19.242: INFO: namespace: e2e-tests-gc-tk7rx, resource: bindings, ignored listing per whitelist May 6 11:32:19.259: INFO: namespace e2e-tests-gc-tk7rx deletion completed in 6.100580545s • [SLOW TEST:11.449 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:32:19.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 6 11:32:19.397: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 11:32:19.467: INFO: Waiting for terminating namespaces to be deleted... May 6 11:32:19.470: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 6 11:32:19.477: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 6 11:32:19.477: INFO: Container kube-proxy ready: true, restart count 0 May 6 11:32:19.477: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 6 11:32:19.477: INFO: Container kindnet-cni ready: true, restart count 0 May 6 11:32:19.477: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 6 11:32:19.477: INFO: Container coredns ready: true, restart count 0 May 6 11:32:19.477: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 6 11:32:19.482: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 6 11:32:19.482: INFO: Container kindnet-cni ready: true, restart count 0 May 6 11:32:19.482: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 6 11:32:19.482: INFO: Container coredns ready: true, restart count 0 May 6 11:32:19.482: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 6 11:32:19.482: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-4130713e-8f8d-11ea-b5fe-0242ac110017 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-4130713e-8f8d-11ea-b5fe-0242ac110017 off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-4130713e-8f8d-11ea-b5fe-0242ac110017 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:32:28.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-ln9fz" for this suite. May 6 11:32:46.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:32:46.352: INFO: namespace: e2e-tests-sched-pred-ln9fz, resource: bindings, ignored listing per whitelist May 6 11:32:46.372: INFO: namespace e2e-tests-sched-pred-ln9fz deletion completed in 18.167543848s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:27.112 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:32:46.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 6 11:32:46.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-wn965' May 6 11:32:49.524: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 6 11:32:49.524: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 6 11:32:51.644: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-cmjq9] May 6 11:32:51.644: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-cmjq9" in namespace "e2e-tests-kubectl-wn965" to be "running and ready" May 6 11:32:51.647: INFO: Pod "e2e-test-nginx-rc-cmjq9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37478ms May 6 11:32:53.658: INFO: Pod "e2e-test-nginx-rc-cmjq9": Phase="Running", Reason="", readiness=true. Elapsed: 2.014142273s May 6 11:32:53.658: INFO: Pod "e2e-test-nginx-rc-cmjq9" satisfied condition "running and ready" May 6 11:32:53.658: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-cmjq9] May 6 11:32:53.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wn965' May 6 11:32:53.789: INFO: stderr: "" May 6 11:32:53.789: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 6 11:32:53.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wn965' May 6 11:32:53.917: INFO: stderr: "" May 6 11:32:53.917: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:32:53.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wn965" for this suite. 
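[Editor's note] The "Kubectl run rc" case above uses the since-removed `--generator=run/v1`, which produced a ReplicationController. Roughly the object that generator created is sketched below (the `run` label/selector convention and container name are my assumption about the generator's output, not copied from the log):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// A single-replica ReplicationController running the same nginx image the
	// test passes to `kubectl run`.
	rc := corev1.ReplicationController{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ReplicationController"},
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-rc"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: map[string]string{"run": "e2e-test-nginx-rc"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"run": "e2e-test-nginx-rc"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-rc",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}
```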
May 6 11:32:59.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:32:59.992: INFO: namespace: e2e-tests-kubectl-wn965, resource: bindings, ignored listing per whitelist May 6 11:33:00.024: INFO: namespace e2e-tests-kubectl-wn965 deletion completed in 6.102641399s • [SLOW TEST:13.652 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:33:00.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 11:33:00.239: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 6 11:33:05.244: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 6 11:33:05.244: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 6 11:33:05.281: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-b2vfp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b2vfp/deployments/test-cleanup-deployment,UID:5a120193-8f8d-11ea-99e8-0242ac110002,ResourceVersion:9035346,Generation:1,CreationTimestamp:2020-05-06 11:33:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 6 11:33:05.292: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. May 6 11:33:05.292: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 6 11:33:05.292: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-b2vfp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b2vfp/replicasets/test-cleanup-controller,UID:570fe45f-8f8d-11ea-99e8-0242ac110002,ResourceVersion:9035347,Generation:1,CreationTimestamp:2020-05-06 11:33:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 5a120193-8f8d-11ea-99e8-0242ac110002 0xc001f506f7 0xc001f506f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 6 11:33:05.299: INFO: Pod "test-cleanup-controller-62l2w" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-62l2w,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-b2vfp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b2vfp/pods/test-cleanup-controller-62l2w,UID:5715feae-8f8d-11ea-99e8-0242ac110002,ResourceVersion:9035340,Generation:0,CreationTimestamp:2020-05-06 11:33:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 570fe45f-8f8d-11ea-99e8-0242ac110002 0xc001f51dd7 0xc001f51dd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-bnxxx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bnxxx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bnxxx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f51f10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f51f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:33:00 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:33:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:33:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:33:00 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.155,StartTime:2020-05-06 11:33:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 11:33:03 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e8e2b69765178b6350882efe32c108918a20cb743cca7c88adc46b495eefcf1d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:33:05.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-b2vfp" for this suite. May 6 11:33:11.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:33:11.434: INFO: namespace: e2e-tests-deployment-b2vfp, resource: bindings, ignored listing per whitelist May 6 11:33:11.474: INFO: namespace e2e-tests-deployment-b2vfp deletion completed in 6.157173756s • [SLOW TEST:11.450 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:33:11.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-5dd69075-8f8d-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 11:33:11.620: INFO: Waiting up to 5m0s for pod "pod-configmaps-5dd95e64-8f8d-11ea-b5fe-0242ac110017" in namespace "e2e-tests-configmap-nqbmg" to be "success or failure" May 6 11:33:11.625: INFO: Pod "pod-configmaps-5dd95e64-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.899188ms May 6 11:33:13.695: INFO: Pod "pod-configmaps-5dd95e64-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075673533s May 6 11:33:15.700: INFO: Pod "pod-configmaps-5dd95e64-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08063619s May 6 11:33:17.703: INFO: Pod "pod-configmaps-5dd95e64-8f8d-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.083760665s STEP: Saw pod success May 6 11:33:17.703: INFO: Pod "pod-configmaps-5dd95e64-8f8d-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:33:17.706: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-5dd95e64-8f8d-11ea-b5fe-0242ac110017 container configmap-volume-test: STEP: delete the pod May 6 11:33:17.761: INFO: Waiting for pod pod-configmaps-5dd95e64-8f8d-11ea-b5fe-0242ac110017 to disappear May 6 11:33:17.808: INFO: Pod pod-configmaps-5dd95e64-8f8d-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:33:17.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nqbmg" for this suite. May 6 11:33:25.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:33:25.949: INFO: namespace: e2e-tests-configmap-nqbmg, resource: bindings, ignored listing per whitelist May 6 11:33:25.954: INFO: namespace e2e-tests-configmap-nqbmg deletion completed in 8.141686712s • [SLOW TEST:14.479 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:33:25.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:33:30.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-hmvgl" for this suite. 
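[Editor's note] The Kubelet "busybox command ... should print the output to logs" case above only needs a pod whose command writes to stdout, which the kubelet captures as container logs. A minimal sketch with illustrative names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The echoed text ends up in the container log, retrievable with
	// `kubectl logs busybox-logs-demo`.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo 'Hello from the busybox container'"},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```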
May 6 11:34:10.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:34:10.166: INFO: namespace: e2e-tests-kubelet-test-hmvgl, resource: bindings, ignored listing per whitelist May 6 11:34:10.233: INFO: namespace e2e-tests-kubelet-test-hmvgl deletion completed in 40.087342519s • [SLOW TEST:44.279 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:34:10.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-80de31b3-8f8d-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 11:34:10.372: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-80ded768-8f8d-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-bzjgz" to be "success or failure" May 6 11:34:10.382: INFO: Pod "pod-projected-configmaps-80ded768-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 10.557633ms May 6 11:34:12.387: INFO: Pod "pod-projected-configmaps-80ded768-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014996358s May 6 11:34:14.391: INFO: Pod "pod-projected-configmaps-80ded768-8f8d-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019526711s STEP: Saw pod success May 6 11:34:14.391: INFO: Pod "pod-projected-configmaps-80ded768-8f8d-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:34:14.395: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-80ded768-8f8d-11ea-b5fe-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 6 11:34:14.458: INFO: Waiting for pod pod-projected-configmaps-80ded768-8f8d-11ea-b5fe-0242ac110017 to disappear May 6 11:34:14.466: INFO: Pod pod-projected-configmaps-80ded768-8f8d-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:34:14.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bzjgz" for this suite. 
May 6 11:34:20.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:34:20.540: INFO: namespace: e2e-tests-projected-bzjgz, resource: bindings, ignored listing per whitelist May 6 11:34:20.566: INFO: namespace e2e-tests-projected-bzjgz deletion completed in 6.096669018s • [SLOW TEST:10.332 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:34:20.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy May 6 11:34:20.652: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix657700967/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:34:20.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-86rcs" for this suite. 
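[Editor's note] The "Proxy server should support --unix-socket=/path" case above starts `kubectl proxy` on a unix socket and then fetches `/api/`. The sketch below shows one way to talk to such a proxy from Go with only the standard library; the socket path is an assumed example, and it presumes a `kubectl proxy --unix-socket=...` process is already running.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	const socket = "/tmp/kubectl-proxy.sock" // illustrative path

	// The proxy speaks plain HTTP over the unix socket, so the client only
	// needs a dialer that connects to the socket instead of a TCP address.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socket)
			},
		},
	}

	// The host part of the URL is ignored; every request goes to the socket.
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // the /api/ discovery document, as in the test's "retrieving proxy /api/ output" step
}
```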
May 6 11:34:26.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:34:26.787: INFO: namespace: e2e-tests-kubectl-86rcs, resource: bindings, ignored listing per whitelist May 6 11:34:26.819: INFO: namespace e2e-tests-kubectl-86rcs deletion completed in 6.085719523s • [SLOW TEST:6.253 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:34:26.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 11:34:26.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 6 11:34:27.058: INFO: stderr: "" May 6 11:34:27.058: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:34:27.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7gtwp" for this suite. 
May 6 11:34:33.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:34:33.094: INFO: namespace: e2e-tests-kubectl-7gtwp, resource: bindings, ignored listing per whitelist May 6 11:34:33.162: INFO: namespace e2e-tests-kubectl-7gtwp deletion completed in 6.099643354s • [SLOW TEST:6.342 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:34:33.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 6 11:34:33.263: INFO: Waiting up to 5m0s for pod "pod-8e85e82d-8f8d-11ea-b5fe-0242ac110017" in namespace "e2e-tests-emptydir-fcbfw" to be "success or failure" May 6 11:34:33.273: INFO: Pod "pod-8e85e82d-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.728811ms May 6 11:34:35.797: INFO: Pod "pod-8e85e82d-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.53389529s May 6 11:34:37.801: INFO: Pod "pod-8e85e82d-8f8d-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.53746041s May 6 11:34:39.805: INFO: Pod "pod-8e85e82d-8f8d-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.541664421s STEP: Saw pod success May 6 11:34:39.805: INFO: Pod "pod-8e85e82d-8f8d-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:34:39.808: INFO: Trying to get logs from node hunter-worker2 pod pod-8e85e82d-8f8d-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 11:34:39.852: INFO: Waiting for pod pod-8e85e82d-8f8d-11ea-b5fe-0242ac110017 to disappear May 6 11:34:39.856: INFO: Pod pod-8e85e82d-8f8d-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:34:39.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-fcbfw" for this suite. 
May 6 11:34:45.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:34:45.975: INFO: namespace: e2e-tests-emptydir-fcbfw, resource: bindings, ignored listing per whitelist May 6 11:34:46.039: INFO: namespace e2e-tests-emptydir-fcbfw deletion completed in 6.179553769s • [SLOW TEST:12.877 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:34:46.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 6 11:34:46.235: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:34:54.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-kk6x7" for this suite. 
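[Editor's note] The InitContainer "should invoke init containers on a RestartAlways pod" case above checks that init containers run to completion, in order, before the main container starts. A minimal sketch of that pod shape, assuming busybox in place of the suite's images:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two init containers exit successfully one after the other, then the
	// long-running main container starts; RestartPolicy Always keeps it up.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "init-containers-demo"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```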
May 6 11:35:16.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:35:16.381: INFO: namespace: e2e-tests-init-container-kk6x7, resource: bindings, ignored listing per whitelist May 6 11:35:16.443: INFO: namespace e2e-tests-init-container-kk6x7 deletion completed in 22.102415469s • [SLOW TEST:30.403 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:35:16.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 11:35:16.647: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 6 11:35:16.653: INFO: Number of nodes with available pods: 0 May 6 11:35:16.653: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 6 11:35:16.708: INFO: Number of nodes with available pods: 0 May 6 11:35:16.708: INFO: Node hunter-worker is running more than one daemon pod May 6 11:35:17.712: INFO: Number of nodes with available pods: 0 May 6 11:35:17.712: INFO: Node hunter-worker is running more than one daemon pod May 6 11:35:18.739: INFO: Number of nodes with available pods: 0 May 6 11:35:18.739: INFO: Node hunter-worker is running more than one daemon pod May 6 11:35:19.713: INFO: Number of nodes with available pods: 0 May 6 11:35:19.713: INFO: Node hunter-worker is running more than one daemon pod May 6 11:35:20.712: INFO: Number of nodes with available pods: 0 May 6 11:35:20.713: INFO: Node hunter-worker is running more than one daemon pod May 6 11:35:21.712: INFO: Number of nodes with available pods: 1 May 6 11:35:21.712: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 6 11:35:21.749: INFO: Number of nodes with available pods: 1 May 6 11:35:21.749: INFO: Number of running nodes: 0, number of available pods: 1 May 6 11:35:22.766: INFO: Number of nodes with available pods: 0 May 6 11:35:22.766: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 6 11:35:22.796: INFO: Number of nodes with available pods: 0 May 6 11:35:22.796: INFO: Node hunter-worker is running more than one daemon pod May 6 11:35:23.817: INFO: Number of nodes with available pods: 0 May 6 11:35:23.817: INFO: Node hunter-worker is running more than one daemon pod May 6 11:35:24.835: INFO: Number of nodes with available pods: 0 May 6 11:35:24.835: INFO: Node hunter-worker is running more than one daemon pod May 6 11:35:25.801: INFO: Number of nodes with available pods: 0 May 6 11:35:25.801: INFO: Node hunter-worker is running more than one daemon pod May 6 11:35:26.801: INFO: Number of nodes with available pods: 0 May 6 11:35:26.801: INFO: Node hunter-worker is running more than one daemon pod May 6 11:35:27.800: INFO: Number of nodes with available pods: 0 May 6 11:35:27.800: INFO: Node hunter-worker is running more than one daemon pod May 6 11:35:28.800: INFO: Number of nodes with available pods: 0 May 6 11:35:28.800: INFO: Node hunter-worker is running more than one daemon pod May 6 11:35:29.800: INFO: Number of nodes with available pods: 0 May 6 11:35:29.800: INFO: Node hunter-worker is running more than one daemon pod May 6 11:35:30.799: INFO: Number of nodes with available pods: 1 May 6 11:35:30.799: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-tr5b7, will wait for the garbage collector to delete the pods May 6 11:35:30.858: INFO: Deleting DaemonSet.extensions daemon-set took: 4.644263ms May 6 11:35:30.958: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.205599ms May 6 11:35:41.362: INFO: Number of nodes with available pods: 0 May 6 11:35:41.362: INFO: Number of running nodes: 0, number of available pods: 0 May 6 11:35:41.368: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-tr5b7/daemonsets","resourceVersion":"9035891"},"items":null} May 6 11:35:41.371: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-tr5b7/pods","resourceVersion":"9035891"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:35:41.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-tr5b7" for this suite. May 6 11:35:47.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:35:47.471: INFO: namespace: e2e-tests-daemonsets-tr5b7, resource: bindings, ignored listing per whitelist May 6 11:35:47.475: INFO: namespace e2e-tests-daemonsets-tr5b7 deletion completed in 6.070310618s • [SLOW TEST:31.032 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:35:47.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 11:35:47.581: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:35:51.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-fndg5" for this suite. 
May 6 11:36:31.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:36:31.660: INFO: namespace: e2e-tests-pods-fndg5, resource: bindings, ignored listing per whitelist May 6 11:36:31.709: INFO: namespace e2e-tests-pods-fndg5 deletion completed in 40.080526701s • [SLOW TEST:44.235 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:36:31.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0506 11:36:41.975613 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 6 11:36:41.975: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:36:41.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-9ptrg" for this suite. 
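[annotation] For the garbage-collector case above ("delete pods created by rc when not orphaning"), a minimal client-go sketch, assuming a recent client-go where Delete takes a context: deleting the ReplicationController without the Orphan policy lets the garbage collector remove its pods, which is what the test waits for.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCAndDependents removes a ReplicationController and lets the garbage
// collector delete its pods. With Background (or Foreground) propagation the
// dependents are collected; Orphan would leave the pods behind.
func deleteRCAndDependents(cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.CoreV1().ReplicationControllers(ns).Delete(
		context.TODO(), name, metav1.DeleteOptions{PropagationPolicy: &policy})
}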
May 6 11:36:47.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:36:48.004: INFO: namespace: e2e-tests-gc-9ptrg, resource: bindings, ignored listing per whitelist May 6 11:36:48.073: INFO: namespace e2e-tests-gc-9ptrg deletion completed in 6.09479177s • [SLOW TEST:16.364 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:36:48.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 11:36:48.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-def05add-8f8d-11ea-b5fe-0242ac110017" in namespace "e2e-tests-downward-api-2tll7" to be "success or failure" May 6 11:36:48.190: INFO: Pod "downwardapi-volume-def05add-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 12.21092ms May 6 11:36:50.194: INFO: Pod "downwardapi-volume-def05add-8f8d-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016634271s May 6 11:36:52.198: INFO: Pod "downwardapi-volume-def05add-8f8d-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020608302s STEP: Saw pod success May 6 11:36:52.198: INFO: Pod "downwardapi-volume-def05add-8f8d-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:36:52.201: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-def05add-8f8d-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 11:36:52.226: INFO: Waiting for pod downwardapi-volume-def05add-8f8d-11ea-b5fe-0242ac110017 to disappear May 6 11:36:52.230: INFO: Pod downwardapi-volume-def05add-8f8d-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:36:52.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2tll7" for this suite. 
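[annotation] A sketch of the pod shape the "set mode on item file" check exercises: a downward-API volume item with an explicit per-file Mode. The object name, busybox image, command, and mount path are illustrative placeholders, not the test's actual values.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod builds a pod whose downward-API volume exposes metadata.name
// as a file with mode 0400.
func downwardAPIPod() *corev1.Pod {
	mode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							Mode:     &mode,
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
}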
May 6 11:36:58.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:36:58.338: INFO: namespace: e2e-tests-downward-api-2tll7, resource: bindings, ignored listing per whitelist May 6 11:36:58.349: INFO: namespace e2e-tests-downward-api-2tll7 deletion completed in 6.116012544s • [SLOW TEST:10.276 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:36:58.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod May 6 11:37:04.553: INFO: Pod pod-hostip-e515907b-8f8d-11ea-b5fe-0242ac110017 has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:37:04.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-hfkqq" for this suite. 
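[annotation] The "should get a host IP" case above waits until status.hostIP is populated once the pod is scheduled. A minimal client-go sketch of that read (cs, ns, and name are placeholders):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printHostIP fetches a pod and prints status.hostIP, the field the check
// inspects after scheduling.
func printHostIP(cs kubernetes.Interface, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("pod %s has hostIP %s\n", pod.Name, pod.Status.HostIP)
	return nil
}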
May 6 11:37:26.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:37:26.602: INFO: namespace: e2e-tests-pods-hfkqq, resource: bindings, ignored listing per whitelist May 6 11:37:26.656: INFO: namespace e2e-tests-pods-hfkqq deletion completed in 22.101085s • [SLOW TEST:28.307 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:37:26.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 6 11:37:27.188: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2w7dt,SelfLink:/api/v1/namespaces/e2e-tests-watch-2w7dt/configmaps/e2e-watch-test-watch-closed,UID:f628bc23-8f8d-11ea-99e8-0242ac110002,ResourceVersion:9036238,Generation:0,CreationTimestamp:2020-05-06 11:37:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 6 11:37:27.188: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2w7dt,SelfLink:/api/v1/namespaces/e2e-tests-watch-2w7dt/configmaps/e2e-watch-test-watch-closed,UID:f628bc23-8f8d-11ea-99e8-0242ac110002,ResourceVersion:9036240,Generation:0,CreationTimestamp:2020-05-06 11:37:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 6 11:37:27.263: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2w7dt,SelfLink:/api/v1/namespaces/e2e-tests-watch-2w7dt/configmaps/e2e-watch-test-watch-closed,UID:f628bc23-8f8d-11ea-99e8-0242ac110002,ResourceVersion:9036241,Generation:0,CreationTimestamp:2020-05-06 11:37:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 6 11:37:27.263: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-2w7dt,SelfLink:/api/v1/namespaces/e2e-tests-watch-2w7dt/configmaps/e2e-watch-test-watch-closed,UID:f628bc23-8f8d-11ea-99e8-0242ac110002,ResourceVersion:9036242,Generation:0,CreationTimestamp:2020-05-06 11:37:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:37:27.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-2w7dt" for this suite. May 6 11:37:33.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:37:33.424: INFO: namespace: e2e-tests-watch-2w7dt, resource: bindings, ignored listing per whitelist May 6 11:37:33.424: INFO: namespace e2e-tests-watch-2w7dt deletion completed in 6.090275371s • [SLOW TEST:6.768 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:37:33.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-fa00df86-8f8d-11ea-b5fe-0242ac110017 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-fa00df86-8f8d-11ea-b5fe-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:37:39.628: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-jwz47" for this suite. May 6 11:38:01.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:38:01.816: INFO: namespace: e2e-tests-configmap-jwz47, resource: bindings, ignored listing per whitelist May 6 11:38:01.869: INFO: namespace e2e-tests-configmap-jwz47 deletion completed in 22.236424867s • [SLOW TEST:28.444 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:38:01.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 6 11:38:09.042: INFO: Successfully updated pod "labelsupdate0b1df888-8f8e-11ea-b5fe-0242ac110017" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:38:11.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qrnqz" for this suite. 
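[annotation] The ConfigMap "updates should be reflected in volume" case above mutates the ConfigMap and then polls the mounted file until the kubelet syncs the new contents into the running pod. A hedged client-go sketch of the update step; the key and value are placeholders.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bumpConfigMap changes one key in place; pods mounting the ConfigMap as a
// volume eventually see the new file contents without restarting.
func bumpConfigMap(cs kubernetes.Interface, ns, name string) error {
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // placeholder key/value
	_, err = cs.CoreV1().ConfigMaps(ns).Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}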
May 6 11:38:33.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:38:33.117: INFO: namespace: e2e-tests-projected-qrnqz, resource: bindings, ignored listing per whitelist May 6 11:38:33.163: INFO: namespace e2e-tests-projected-qrnqz deletion completed in 22.095001127s • [SLOW TEST:31.294 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:38:33.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 6 11:38:37.272: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-1d9135f1-8f8e-11ea-b5fe-0242ac110017,GenerateName:,Namespace:e2e-tests-events-ppk2g,SelfLink:/api/v1/namespaces/e2e-tests-events-ppk2g/pods/send-events-1d9135f1-8f8e-11ea-b5fe-0242ac110017,UID:1d937f97-8f8e-11ea-99e8-0242ac110002,ResourceVersion:9036447,Generation:0,CreationTimestamp:2020-05-06 11:38:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 240530154,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5sw8r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5sw8r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-5sw8r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014d31c0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0014d31e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:38:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:38:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:38:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:38:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.138,StartTime:2020-05-06 11:38:33 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-06 11:38:35 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://ff79b1f79b633cd24f9ed2ca0f8a0746c74b69e68939a3e7d5a8cfbb8dc4c2d5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 6 11:38:39.277: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 6 11:38:41.281: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:38:41.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-ppk2g" for this suite. May 6 11:39:19.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:39:19.341: INFO: namespace: e2e-tests-events-ppk2g, resource: bindings, ignored listing per whitelist May 6 11:39:19.378: INFO: namespace e2e-tests-events-ppk2g deletion completed in 38.08529533s • [SLOW TEST:46.214 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:39:19.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0506 11:39:50.047915 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
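[annotation] The Events case that just completed looks for scheduler and kubelet events referencing the pod. A rough client-go equivalent of that query, using a field selector on the involved object (cs, ns, and podName are placeholders):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podEvents lists events whose involved object is the named pod.
func podEvents(cs kubernetes.Interface, ns, podName string) error {
	evs, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.kind=Pod,involvedObject.name=" + podName,
	})
	if err != nil {
		return err
	}
	for _, e := range evs.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Source.Component, e.Reason, e.Message)
	}
	return nil
}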
May 6 11:39:50.047: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:39:50.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-zzff5" for this suite. May 6 11:39:58.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:39:58.155: INFO: namespace: e2e-tests-gc-zzff5, resource: bindings, ignored listing per whitelist May 6 11:39:58.158: INFO: namespace e2e-tests-gc-zzff5 deletion completed in 8.107484059s • [SLOW TEST:38.780 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:39:58.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 6 11:39:58.274: INFO: namespace e2e-tests-kubectl-67phc May 6 11:39:58.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-67phc' May 6 11:39:58.545: INFO: stderr: "" May 6 11:39:58.545: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
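[annotation] For the "orphan RS created by deployment" case above, a sketch of the Orphan propagation policy with client-go: the Deployment is deleted but its ReplicaSets (and their pods) are left behind, which the test confirms by waiting 30 seconds. The follow-up List is only an illustrative check, not the test's own assertion.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// orphanDeploymentReplicaSets deletes a Deployment with Orphan propagation
// and then lists ReplicaSets to show they survived the delete.
func orphanDeploymentReplicaSets(cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationOrphan
	if err := cs.AppsV1().Deployments(ns).Delete(
		context.TODO(), name, metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		return err
	}
	rsList, err := cs.AppsV1().ReplicaSets(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("replica sets remaining after orphan delete: %d\n", len(rsList.Items))
	return nil
}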
May 6 11:39:59.550: INFO: Selector matched 1 pods for map[app:redis] May 6 11:39:59.550: INFO: Found 0 / 1 May 6 11:40:00.636: INFO: Selector matched 1 pods for map[app:redis] May 6 11:40:00.636: INFO: Found 0 / 1 May 6 11:40:01.551: INFO: Selector matched 1 pods for map[app:redis] May 6 11:40:01.551: INFO: Found 0 / 1 May 6 11:40:02.550: INFO: Selector matched 1 pods for map[app:redis] May 6 11:40:02.550: INFO: Found 1 / 1 May 6 11:40:02.550: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 6 11:40:02.553: INFO: Selector matched 1 pods for map[app:redis] May 6 11:40:02.553: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 6 11:40:02.553: INFO: wait on redis-master startup in e2e-tests-kubectl-67phc May 6 11:40:02.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-n4844 redis-master --namespace=e2e-tests-kubectl-67phc' May 6 11:40:02.671: INFO: stderr: "" May 6 11:40:02.671: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 06 May 11:40:01.419 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 May 11:40:01.419 # Server started, Redis version 3.2.12\n1:M 06 May 11:40:01.419 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 May 11:40:01.419 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 6 11:40:02.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-67phc' May 6 11:40:02.854: INFO: stderr: "" May 6 11:40:02.854: INFO: stdout: "service/rm2 exposed\n" May 6 11:40:02.856: INFO: Service rm2 in namespace e2e-tests-kubectl-67phc found. STEP: exposing service May 6 11:40:04.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-67phc' May 6 11:40:05.048: INFO: stderr: "" May 6 11:40:05.048: INFO: stdout: "service/rm3 exposed\n" May 6 11:40:05.062: INFO: Service rm3 in namespace e2e-tests-kubectl-67phc found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:40:07.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-67phc" for this suite. 
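[annotation] The kubectl expose case above turns the redis-master RC into Services rm2 and rm3. A hedged client-go sketch of roughly what `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` creates; the selector labels are assumed from the RC's pod template rather than taken from the test manifest.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// exposeRedisRC creates a ClusterIP Service that forwards port 1234 to the
// redis container port 6379 on pods matching the assumed labels.
func exposeRedisRC(cs kubernetes.Interface, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "redis", "role": "master"}, // assumed labels
			Ports: []corev1.ServicePort{{
				Port:       1234,
				TargetPort: intstr.FromInt(6379),
			}},
		},
	}
	_, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{})
	return err
}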
May 6 11:40:29.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:40:29.151: INFO: namespace: e2e-tests-kubectl-67phc, resource: bindings, ignored listing per whitelist May 6 11:40:29.169: INFO: namespace e2e-tests-kubectl-67phc deletion completed in 22.095040225s • [SLOW TEST:31.011 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:40:29.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-62bba478-8f8e-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 11:40:29.303: INFO: Waiting up to 5m0s for pod "pod-configmaps-62bdb58c-8f8e-11ea-b5fe-0242ac110017" in namespace "e2e-tests-configmap-hr5rh" to be "success or failure" May 6 11:40:29.307: INFO: Pod "pod-configmaps-62bdb58c-8f8e-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.980805ms May 6 11:40:31.384: INFO: Pod "pod-configmaps-62bdb58c-8f8e-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081442728s May 6 11:40:33.388: INFO: Pod "pod-configmaps-62bdb58c-8f8e-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08504891s STEP: Saw pod success May 6 11:40:33.388: INFO: Pod "pod-configmaps-62bdb58c-8f8e-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:40:33.390: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-62bdb58c-8f8e-11ea-b5fe-0242ac110017 container configmap-volume-test: STEP: delete the pod May 6 11:40:33.423: INFO: Waiting for pod pod-configmaps-62bdb58c-8f8e-11ea-b5fe-0242ac110017 to disappear May 6 11:40:33.438: INFO: Pod pod-configmaps-62bdb58c-8f8e-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:40:33.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hr5rh" for this suite. 
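[annotation] A sketch of the pod shape behind the "consumable from pods in volume with mappings as non-root" check: a configMap volume whose key is remapped to a different path, read by a container running with a non-root UID. Names, the busybox image, and paths are placeholders.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapVolumePodNonRoot mounts one key of a ConfigMap at a remapped path
// and runs the consuming container as UID 1000.
func configMapVolumePodNonRoot(cmName string) *corev1.Pod {
	uid := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			RestartPolicy:   corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
						Items:                []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
					},
				},
			}},
		},
	}
}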
May 6 11:40:39.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:40:39.480: INFO: namespace: e2e-tests-configmap-hr5rh, resource: bindings, ignored listing per whitelist May 6 11:40:39.527: INFO: namespace e2e-tests-configmap-hr5rh deletion completed in 6.085532536s • [SLOW TEST:10.359 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:40:39.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 6 11:40:39.756: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:39.759: INFO: Number of nodes with available pods: 0 May 6 11:40:39.759: INFO: Node hunter-worker is running more than one daemon pod May 6 11:40:40.768: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:40.772: INFO: Number of nodes with available pods: 0 May 6 11:40:40.772: INFO: Node hunter-worker is running more than one daemon pod May 6 11:40:41.860: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:41.863: INFO: Number of nodes with available pods: 0 May 6 11:40:41.863: INFO: Node hunter-worker is running more than one daemon pod May 6 11:40:42.883: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:42.886: INFO: Number of nodes with available pods: 0 May 6 11:40:42.886: INFO: Node hunter-worker is running more than one daemon pod May 6 11:40:43.790: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:43.794: INFO: Number of nodes with available pods: 1 May 6 11:40:43.794: INFO: Node hunter-worker2 is running more than one daemon pod May 6 11:40:44.764: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node May 6 11:40:44.768: INFO: Number of nodes with available pods: 2 May 6 11:40:44.768: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 6 11:40:44.818: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:44.821: INFO: Number of nodes with available pods: 1 May 6 11:40:44.821: INFO: Node hunter-worker2 is running more than one daemon pod May 6 11:40:45.826: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:45.829: INFO: Number of nodes with available pods: 1 May 6 11:40:45.829: INFO: Node hunter-worker2 is running more than one daemon pod May 6 11:40:46.864: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:46.868: INFO: Number of nodes with available pods: 1 May 6 11:40:46.868: INFO: Node hunter-worker2 is running more than one daemon pod May 6 11:40:47.826: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:47.829: INFO: Number of nodes with available pods: 1 May 6 11:40:47.829: INFO: Node hunter-worker2 is running more than one daemon pod May 6 11:40:48.825: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:48.829: INFO: Number of nodes with available pods: 1 May 6 11:40:48.829: INFO: Node hunter-worker2 is running more than one daemon pod May 6 11:40:49.826: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:49.830: INFO: Number of nodes with available pods: 1 May 6 11:40:49.830: INFO: Node hunter-worker2 is running more than one daemon pod May 6 11:40:50.840: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:50.843: INFO: Number of nodes with available pods: 1 May 6 11:40:50.843: INFO: Node hunter-worker2 is running more than one daemon pod May 6 11:40:51.828: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:51.832: INFO: Number of nodes with available pods: 1 May 6 11:40:51.832: INFO: Node hunter-worker2 is running more than one daemon pod May 6 11:40:52.871: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:52.875: INFO: Number of nodes with available pods: 1 May 6 11:40:52.875: INFO: Node hunter-worker2 is running more than one daemon pod May 6 11:40:53.826: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:53.828: INFO: Number of nodes 
with available pods: 1 May 6 11:40:53.828: INFO: Node hunter-worker2 is running more than one daemon pod May 6 11:40:54.825: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:54.828: INFO: Number of nodes with available pods: 1 May 6 11:40:54.828: INFO: Node hunter-worker2 is running more than one daemon pod May 6 11:40:55.826: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 11:40:55.830: INFO: Number of nodes with available pods: 2 May 6 11:40:55.830: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-nh2wc, will wait for the garbage collector to delete the pods May 6 11:40:55.892: INFO: Deleting DaemonSet.extensions daemon-set took: 6.012558ms May 6 11:40:55.993: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.444537ms May 6 11:41:01.296: INFO: Number of nodes with available pods: 0 May 6 11:41:01.296: INFO: Number of running nodes: 0, number of available pods: 0 May 6 11:41:01.299: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-nh2wc/daemonsets","resourceVersion":"9036919"},"items":null} May 6 11:41:01.301: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-nh2wc/pods","resourceVersion":"9036919"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:41:01.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-nh2wc" for this suite. 
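[annotation] The DaemonSet case above polls until every schedulable node runs an available daemon pod (the control-plane node is skipped because of its NoSchedule taint). A client-go sketch of an equivalent status check, assuming a recent client-go; cs, ns, and name are placeholders.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// daemonSetReady reports whether the DaemonSet has an available pod on every
// node it is scheduled to.
func daemonSetReady(cs kubernetes.Interface, ns, name string) (bool, error) {
	ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	fmt.Printf("desired=%d available=%d\n", ds.Status.DesiredNumberScheduled, ds.Status.NumberAvailable)
	return ds.Status.DesiredNumberScheduled > 0 &&
		ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
}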
May 6 11:41:07.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:41:07.437: INFO: namespace: e2e-tests-daemonsets-nh2wc, resource: bindings, ignored listing per whitelist May 6 11:41:07.440: INFO: namespace e2e-tests-daemonsets-nh2wc deletion completed in 6.1276402s • [SLOW TEST:27.912 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:41:07.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:41:07.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vps78" for this suite. 
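[annotation] The "Pods Set QOS Class" case above verifies that the control plane assigns a QoS class to the submitted pod. A small sketch of reading that field: containers whose requests equal their limits yield Guaranteed, while a pod with no requests or limits at all is BestEffort.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printQOSClass reads the QoS class recorded in the pod's status.
func printQOSClass(cs kubernetes.Interface, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Println("QoS class:", pod.Status.QOSClass) // e.g. "Guaranteed"
	return nil
}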
May 6 11:41:29.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:41:29.742: INFO: namespace: e2e-tests-pods-vps78, resource: bindings, ignored listing per whitelist May 6 11:41:29.750: INFO: namespace e2e-tests-pods-vps78 deletion completed in 22.154064386s • [SLOW TEST:22.310 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:41:29.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 11:41:29.880: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86d9718f-8f8e-11ea-b5fe-0242ac110017" in namespace "e2e-tests-downward-api-qqbbw" to be "success or failure" May 6 11:41:29.900: INFO: Pod "downwardapi-volume-86d9718f-8f8e-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 19.889668ms May 6 11:41:31.904: INFO: Pod "downwardapi-volume-86d9718f-8f8e-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023498103s May 6 11:41:33.908: INFO: Pod "downwardapi-volume-86d9718f-8f8e-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027858151s STEP: Saw pod success May 6 11:41:33.908: INFO: Pod "downwardapi-volume-86d9718f-8f8e-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:41:33.911: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-86d9718f-8f8e-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 11:41:33.933: INFO: Waiting for pod downwardapi-volume-86d9718f-8f8e-11ea-b5fe-0242ac110017 to disappear May 6 11:41:33.937: INFO: Pod downwardapi-volume-86d9718f-8f8e-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:41:33.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qqbbw" for this suite. 
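[annotation] The Downward API "provide container's memory request" case above uses a resourceFieldRef item rather than a fieldRef one. Only the volume item differs from the earlier downward-API sketch, so just that piece is shown; the path and container name are illustrative.

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// memoryRequestItem projects requests.memory of the named container into a
// file inside a downward-API volume.
func memoryRequestItem() corev1.DownwardAPIVolumeFile {
	return corev1.DownwardAPIVolumeFile{
		Path: "memory_request",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "requests.memory",
		},
	}
}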
May 6 11:41:39.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:41:40.002: INFO: namespace: e2e-tests-downward-api-qqbbw, resource: bindings, ignored listing per whitelist May 6 11:41:40.027: INFO: namespace e2e-tests-downward-api-qqbbw deletion completed in 6.085798766s • [SLOW TEST:10.277 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:41:40.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token May 6 11:41:40.717: INFO: Waiting up to 5m0s for pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-9bdbj" in namespace "e2e-tests-svcaccounts-6wrz4" to be "success or failure" May 6 11:41:40.734: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-9bdbj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.475059ms May 6 11:41:42.757: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-9bdbj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039238806s May 6 11:41:44.852: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-9bdbj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134964185s May 6 11:41:46.855: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-9bdbj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137951295s May 6 11:41:48.859: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-9bdbj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.141807992s STEP: Saw pod success May 6 11:41:48.859: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-9bdbj" satisfied condition "success or failure" May 6 11:41:48.862: INFO: Trying to get logs from node hunter-worker pod pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-9bdbj container token-test: STEP: delete the pod May 6 11:41:48.878: INFO: Waiting for pod pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-9bdbj to disappear May 6 11:41:48.899: INFO: Pod pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-9bdbj no longer exists STEP: Creating a pod to test consume service account root CA May 6 11:41:48.903: INFO: Waiting up to 5m0s for pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-dqxnf" in namespace "e2e-tests-svcaccounts-6wrz4" to be "success or failure" May 6 11:41:48.913: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-dqxnf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.409498ms May 6 11:41:50.917: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-dqxnf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014306596s May 6 11:41:52.921: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-dqxnf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018316637s May 6 11:41:54.924: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-dqxnf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02174824s May 6 11:41:56.928: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-dqxnf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.02567736s STEP: Saw pod success May 6 11:41:56.928: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-dqxnf" satisfied condition "success or failure" May 6 11:41:56.931: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-dqxnf container root-ca-test: STEP: delete the pod May 6 11:41:56.966: INFO: Waiting for pod pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-dqxnf to disappear May 6 11:41:56.980: INFO: Pod pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-dqxnf no longer exists STEP: Creating a pod to test consume service account namespace May 6 11:41:56.983: INFO: Waiting up to 5m0s for pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-lnp4j" in namespace "e2e-tests-svcaccounts-6wrz4" to be "success or failure" May 6 11:41:56.986: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-lnp4j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.608396ms May 6 11:41:58.990: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-lnp4j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007016637s May 6 11:42:00.993: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-lnp4j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010154427s May 6 11:42:02.998: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-lnp4j": Phase="Running", Reason="", readiness=false. Elapsed: 6.014618403s May 6 11:42:05.002: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-lnp4j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018497926s STEP: Saw pod success May 6 11:42:05.002: INFO: Pod "pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-lnp4j" satisfied condition "success or failure" May 6 11:42:05.004: INFO: Trying to get logs from node hunter-worker pod pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-lnp4j container namespace-test: STEP: delete the pod May 6 11:42:05.034: INFO: Waiting for pod pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-lnp4j to disappear May 6 11:42:05.062: INFO: Pod pod-service-account-8d4b3995-8f8e-11ea-b5fe-0242ac110017-lnp4j no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:42:05.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-6wrz4" for this suite. 
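[annotation] The ServiceAccounts case above runs three pods that read the token, root CA, and namespace files the kubelet mounts from the pod's service account. A sketch of that in-pod read, using the well-known mount path; it assumes automountServiceAccountToken is not disabled and Go 1.16+ for os.ReadFile.

package main

import (
	"fmt"
	"os"
)

const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

// printServiceAccountFiles reads the projected service-account files from
// inside a pod and reports their sizes.
func printServiceAccountFiles() error {
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(saDir + "/" + f)
		if err != nil {
			return err
		}
		fmt.Printf("%s: %d bytes\n", f, len(b))
	}
	return nil
}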
May 6 11:42:11.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:42:11.156: INFO: namespace: e2e-tests-svcaccounts-6wrz4, resource: bindings, ignored listing per whitelist May 6 11:42:11.161: INFO: namespace e2e-tests-svcaccounts-6wrz4 deletion completed in 6.095095362s • [SLOW TEST:31.134 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:42:11.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 6 11:42:19.374: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 11:42:19.398: INFO: Pod pod-with-prestop-exec-hook still exists May 6 11:42:21.398: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 11:42:21.403: INFO: Pod pod-with-prestop-exec-hook still exists May 6 11:42:23.398: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 11:42:23.403: INFO: Pod pod-with-prestop-exec-hook still exists May 6 11:42:25.398: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 11:42:25.440: INFO: Pod pod-with-prestop-exec-hook still exists May 6 11:42:27.398: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 11:42:27.402: INFO: Pod pod-with-prestop-exec-hook still exists May 6 11:42:29.398: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 11:42:29.403: INFO: Pod pod-with-prestop-exec-hook still exists May 6 11:42:31.398: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 11:42:31.404: INFO: Pod pod-with-prestop-exec-hook still exists May 6 11:42:33.398: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 11:42:33.402: INFO: Pod pod-with-prestop-exec-hook still exists May 6 11:42:35.398: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 11:42:35.572: INFO: Pod pod-with-prestop-exec-hook still exists May 6 11:42:37.398: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 11:42:37.402: INFO: Pod pod-with-prestop-exec-hook still exists May 6 11:42:39.398: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 11:42:39.407: INFO: Pod pod-with-prestop-exec-hook still exists May 6 11:42:41.398: INFO: Waiting for pod pod-with-prestop-exec-hook to 
disappear May 6 11:42:41.402: INFO: Pod pod-with-prestop-exec-hook still exists May 6 11:42:43.398: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 11:42:43.403: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:42:43.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8w6f4" for this suite. May 6 11:43:05.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:43:05.510: INFO: namespace: e2e-tests-container-lifecycle-hook-8w6f4, resource: bindings, ignored listing per whitelist May 6 11:43:05.530: INFO: namespace e2e-tests-container-lifecycle-hook-8w6f4 deletion completed in 22.116934147s • [SLOW TEST:54.369 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:43:05.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-bff4b9af-8f8e-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 11:43:05.696: INFO: Waiting up to 5m0s for pod "pod-secrets-bff582e0-8f8e-11ea-b5fe-0242ac110017" in namespace "e2e-tests-secrets-qj7wc" to be "success or failure" May 6 11:43:05.728: INFO: Pod "pod-secrets-bff582e0-8f8e-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 32.437222ms May 6 11:43:07.732: INFO: Pod "pod-secrets-bff582e0-8f8e-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036334065s May 6 11:43:09.737: INFO: Pod "pod-secrets-bff582e0-8f8e-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.041273061s May 6 11:43:11.741: INFO: Pod "pod-secrets-bff582e0-8f8e-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.045218826s STEP: Saw pod success May 6 11:43:11.741: INFO: Pod "pod-secrets-bff582e0-8f8e-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:43:11.743: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-bff582e0-8f8e-11ea-b5fe-0242ac110017 container secret-env-test: STEP: delete the pod May 6 11:43:11.804: INFO: Waiting for pod pod-secrets-bff582e0-8f8e-11ea-b5fe-0242ac110017 to disappear May 6 11:43:11.932: INFO: Pod pod-secrets-bff582e0-8f8e-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:43:11.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-qj7wc" for this suite. May 6 11:43:19.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:43:20.027: INFO: namespace: e2e-tests-secrets-qj7wc, resource: bindings, ignored listing per whitelist May 6 11:43:20.027: INFO: namespace e2e-tests-secrets-qj7wc deletion completed in 8.091731073s • [SLOW TEST:14.497 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:43:20.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 11:43:20.178: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 6 11:43:25.183: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 6 11:43:25.183: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 6 11:43:27.187: INFO: Creating deployment "test-rollover-deployment" May 6 11:43:27.209: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 6 11:43:29.228: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 6 11:43:29.234: INFO: Ensure that both replica sets have 1 created replica May 6 11:43:29.239: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 6 11:43:29.246: INFO: Updating deployment test-rollover-deployment May 6 11:43:29.246: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 6 11:43:31.274: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 6 11:43:31.280: INFO: Make sure deployment "test-rollover-deployment" is complete May 6 11:43:31.286: INFO: all replica sets need to contain the 
pod-template-hash label May 6 11:43:31.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362209, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 11:43:33.308: INFO: all replica sets need to contain the pod-template-hash label May 6 11:43:33.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362209, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 11:43:35.293: INFO: all replica sets need to contain the pod-template-hash label May 6 11:43:35.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362213, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 11:43:37.294: INFO: all replica sets need to contain the pod-template-hash label May 6 11:43:37.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362213, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 11:43:39.293: INFO: all replica sets need to contain the pod-template-hash label May 6 11:43:39.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362213, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 11:43:41.293: INFO: all replica sets need to contain the pod-template-hash label May 6 11:43:41.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362213, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 11:43:43.294: INFO: all replica sets need to contain the pod-template-hash label May 6 11:43:43.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362213, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724362207, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 11:43:45.293: INFO: May 6 11:43:45.294: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 6 11:43:45.301: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-wm7qz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wm7qz/deployments/test-rollover-deployment,UID:ccc61d95-8f8e-11ea-99e8-0242ac110002,ResourceVersion:9037525,Generation:2,CreationTimestamp:2020-05-06 11:43:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-06 11:43:27 +0000 UTC 2020-05-06 11:43:27 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-06 11:43:43 +0000 UTC 2020-05-06 11:43:27 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 6 11:43:45.304: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-wm7qz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wm7qz/replicasets/test-rollover-deployment-5b8479fdb6,UID:ce005062-8f8e-11ea-99e8-0242ac110002,ResourceVersion:9037516,Generation:2,CreationTimestamp:2020-05-06 11:43:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ccc61d95-8f8e-11ea-99e8-0242ac110002 0xc00295be37 0xc00295be38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 6 11:43:45.304: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 6 11:43:45.304: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-wm7qz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wm7qz/replicasets/test-rollover-controller,UID:c8960a4c-8f8e-11ea-99e8-0242ac110002,ResourceVersion:9037524,Generation:2,CreationTimestamp:2020-05-06 11:43:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ccc61d95-8f8e-11ea-99e8-0242ac110002 0xc00295bc27 
0xc00295bc28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 6 11:43:45.305: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-wm7qz,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-wm7qz/replicasets/test-rollover-deployment-58494b7559,UID:cccac024-8f8e-11ea-99e8-0242ac110002,ResourceVersion:9037482,Generation:2,CreationTimestamp:2020-05-06 11:43:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment ccc61d95-8f8e-11ea-99e8-0242ac110002 0xc00295bd37 0xc00295bd38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 6 11:43:45.308: INFO: Pod "test-rollover-deployment-5b8479fdb6-hmvtw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-hmvtw,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-wm7qz,SelfLink:/api/v1/namespaces/e2e-tests-deployment-wm7qz/pods/test-rollover-deployment-5b8479fdb6-hmvtw,UID:ce15640a-8f8e-11ea-99e8-0242ac110002,ResourceVersion:9037494,Generation:0,CreationTimestamp:2020-05-06 11:43:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 ce005062-8f8e-11ea-99e8-0242ac110002 0xc0025fd697 0xc0025fd698}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wgzlw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wgzlw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-wgzlw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0025fd730} {node.kubernetes.io/unreachable Exists NoExecute 0xc0025fd750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:43:29 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:43:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:43:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:43:29 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.147,StartTime:2020-05-06 11:43:29 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-06 11:43:32 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://8023ed077d943defe2bc363f0f2a2db8263a9bd2f1302f1ab373c6d0e178fc79}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:43:45.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-wm7qz" for this suite. May 6 11:43:53.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:43:53.476: INFO: namespace: e2e-tests-deployment-wm7qz, resource: bindings, ignored listing per whitelist May 6 11:43:53.540: INFO: namespace e2e-tests-deployment-wm7qz deletion completed in 8.228402046s • [SLOW TEST:33.513 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:43:53.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 11:43:53.632: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:43:57.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-6ztvn" for this suite. 
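
Circling back to the rollover test above: the deployment it creates can be approximated from the object dump in the log (label name=rollover-pod, the redis test image, MinReadySeconds 10, MaxUnavailable 0 / MaxSurge 1). The manifest below is a hedged reconstruction from that dump, not the exact object the framework builds.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10          # from MinReadySeconds:10 in the dump above
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # keep the old pod until the new one is ready
      maxSurge: 1
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0

Changing the pod template image (the test does this through the framework; `kubectl set image deployment/test-rollover-deployment redis=<new-image>` would be the CLI equivalent) creates a new ReplicaSet, and the old ReplicaSets are scaled to zero once the new pod has been ready for minReadySeconds — the "Ensure that both old replica sets have no replicas" check in the log.
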
May 6 11:44:36.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:44:36.056: INFO: namespace: e2e-tests-pods-6ztvn, resource: bindings, ignored listing per whitelist May 6 11:44:36.106: INFO: namespace e2e-tests-pods-6ztvn deletion completed in 38.350775587s • [SLOW TEST:42.566 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:44:36.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:44:40.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-t4k48" for this suite. 
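
The kubelet test above schedules a busybox container whose root filesystem is read-only and verifies that writes to it fail. A minimal sketch of that setting follows; the pod name and command are illustrative assumptions, not the test's own spec.

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo test > /file"]   # this write is expected to fail
    securityContext:
      readOnlyRootFilesystem: true
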
May 6 11:45:23.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:45:23.015: INFO: namespace: e2e-tests-kubelet-test-t4k48, resource: bindings, ignored listing per whitelist May 6 11:45:23.078: INFO: namespace e2e-tests-kubelet-test-t4k48 deletion completed in 42.115114353s • [SLOW TEST:46.972 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:45:23.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-ndj2x/secret-test-11e85f6f-8f8f-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 11:45:23.244: INFO: Waiting up to 5m0s for pod "pod-configmaps-11f12b97-8f8f-11ea-b5fe-0242ac110017" in namespace "e2e-tests-secrets-ndj2x" to be "success or failure" May 6 11:45:23.248: INFO: Pod "pod-configmaps-11f12b97-8f8f-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.529302ms May 6 11:45:25.423: INFO: Pod "pod-configmaps-11f12b97-8f8f-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178208154s May 6 11:45:27.426: INFO: Pod "pod-configmaps-11f12b97-8f8f-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.181897545s STEP: Saw pod success May 6 11:45:27.426: INFO: Pod "pod-configmaps-11f12b97-8f8f-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:45:27.429: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-11f12b97-8f8f-11ea-b5fe-0242ac110017 container env-test: STEP: delete the pod May 6 11:45:27.474: INFO: Waiting for pod pod-configmaps-11f12b97-8f8f-11ea-b5fe-0242ac110017 to disappear May 6 11:45:27.482: INFO: Pod pod-configmaps-11f12b97-8f8f-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:45:27.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ndj2x" for this suite. 
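
The secrets test above injects a secret into a container's environment. One way to express that declaratively is envFrom; the secret name, key/value and image below are illustrative and not necessarily what the test generates.

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-demo          # illustrative name
stringData:
  DATA_1: value-1                 # illustrative key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]  # the injected variable shows up in the environment
    envFrom:
    - secretRef:
        name: secret-test-demo
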
May 6 11:45:33.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:45:33.560: INFO: namespace: e2e-tests-secrets-ndj2x, resource: bindings, ignored listing per whitelist May 6 11:45:33.577: INFO: namespace e2e-tests-secrets-ndj2x deletion completed in 6.092426614s • [SLOW TEST:10.498 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:45:33.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 6 11:45:41.780: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 11:45:41.792: INFO: Pod pod-with-prestop-http-hook still exists May 6 11:45:43.792: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 11:45:43.795: INFO: Pod pod-with-prestop-http-hook still exists May 6 11:45:45.792: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 11:45:45.796: INFO: Pod pod-with-prestop-http-hook still exists May 6 11:45:47.792: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 11:45:47.795: INFO: Pod pod-with-prestop-http-hook still exists May 6 11:45:49.792: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 11:45:49.796: INFO: Pod pod-with-prestop-http-hook still exists May 6 11:45:51.792: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 11:45:51.796: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:45:51.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-dxjmk" for this suite. 
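
The lifecycle-hook test above registers a preStop HTTP hook that the kubelet calls before stopping the container, and checks that a separate handler pod received the request. A minimal sketch of the hook wiring; the image, path and port here are assumptions rather than the test's fixtures.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook    # same name as in the log
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine   # illustrative image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop     # illustrative handler path
          port: 8080
          # host defaults to the pod's own IP; the e2e test points it at its handler pod
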
May 6 11:46:15.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:46:15.883: INFO: namespace: e2e-tests-container-lifecycle-hook-dxjmk, resource: bindings, ignored listing per whitelist May 6 11:46:15.924: INFO: namespace e2e-tests-container-lifecycle-hook-dxjmk deletion completed in 24.109961786s • [SLOW TEST:42.347 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:46:15.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-hfcxp [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 6 11:46:16.089: INFO: Found 0 stateful pods, waiting for 3 May 6 11:46:26.094: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 11:46:26.094: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 11:46:26.094: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 6 11:46:36.095: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 11:46:36.095: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 11:46:36.095: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 6 11:46:36.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hfcxp ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 6 11:46:36.404: INFO: stderr: "I0506 11:46:36.262238 1967 log.go:172] (0xc00015c840) (0xc0007ab400) Create stream\nI0506 11:46:36.262315 1967 log.go:172] (0xc00015c840) (0xc0007ab400) Stream added, broadcasting: 1\nI0506 11:46:36.265727 1967 log.go:172] (0xc00015c840) Reply frame received for 1\nI0506 11:46:36.265771 1967 log.go:172] (0xc00015c840) (0xc0007ab4a0) Create stream\nI0506 11:46:36.265780 1967 log.go:172] (0xc00015c840) (0xc0007ab4a0) Stream added, broadcasting: 3\nI0506 11:46:36.266974 
1967 log.go:172] (0xc00015c840) Reply frame received for 3\nI0506 11:46:36.267030 1967 log.go:172] (0xc00015c840) (0xc00079e000) Create stream\nI0506 11:46:36.267057 1967 log.go:172] (0xc00015c840) (0xc00079e000) Stream added, broadcasting: 5\nI0506 11:46:36.268018 1967 log.go:172] (0xc00015c840) Reply frame received for 5\nI0506 11:46:36.394347 1967 log.go:172] (0xc00015c840) Data frame received for 5\nI0506 11:46:36.394399 1967 log.go:172] (0xc00079e000) (5) Data frame handling\nI0506 11:46:36.394436 1967 log.go:172] (0xc00015c840) Data frame received for 3\nI0506 11:46:36.394453 1967 log.go:172] (0xc0007ab4a0) (3) Data frame handling\nI0506 11:46:36.394475 1967 log.go:172] (0xc0007ab4a0) (3) Data frame sent\nI0506 11:46:36.394490 1967 log.go:172] (0xc00015c840) Data frame received for 3\nI0506 11:46:36.394592 1967 log.go:172] (0xc0007ab4a0) (3) Data frame handling\nI0506 11:46:36.396509 1967 log.go:172] (0xc00015c840) Data frame received for 1\nI0506 11:46:36.396540 1967 log.go:172] (0xc0007ab400) (1) Data frame handling\nI0506 11:46:36.396559 1967 log.go:172] (0xc0007ab400) (1) Data frame sent\nI0506 11:46:36.396665 1967 log.go:172] (0xc00015c840) (0xc0007ab400) Stream removed, broadcasting: 1\nI0506 11:46:36.396907 1967 log.go:172] (0xc00015c840) (0xc0007ab400) Stream removed, broadcasting: 1\nI0506 11:46:36.396938 1967 log.go:172] (0xc00015c840) (0xc0007ab4a0) Stream removed, broadcasting: 3\nI0506 11:46:36.397535 1967 log.go:172] (0xc00015c840) (0xc00079e000) Stream removed, broadcasting: 5\n" May 6 11:46:36.404: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 6 11:46:36.404: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 6 11:46:46.437: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 6 11:46:56.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hfcxp ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:46:56.740: INFO: stderr: "I0506 11:46:56.630020 1990 log.go:172] (0xc000154630) (0xc000758640) Create stream\nI0506 11:46:56.630090 1990 log.go:172] (0xc000154630) (0xc000758640) Stream added, broadcasting: 1\nI0506 11:46:56.633286 1990 log.go:172] (0xc000154630) Reply frame received for 1\nI0506 11:46:56.633364 1990 log.go:172] (0xc000154630) (0xc0007586e0) Create stream\nI0506 11:46:56.633400 1990 log.go:172] (0xc000154630) (0xc0007586e0) Stream added, broadcasting: 3\nI0506 11:46:56.634692 1990 log.go:172] (0xc000154630) Reply frame received for 3\nI0506 11:46:56.634742 1990 log.go:172] (0xc000154630) (0xc000652dc0) Create stream\nI0506 11:46:56.634757 1990 log.go:172] (0xc000154630) (0xc000652dc0) Stream added, broadcasting: 5\nI0506 11:46:56.635775 1990 log.go:172] (0xc000154630) Reply frame received for 5\nI0506 11:46:56.734281 1990 log.go:172] (0xc000154630) Data frame received for 5\nI0506 11:46:56.734326 1990 log.go:172] (0xc000154630) Data frame received for 3\nI0506 11:46:56.734381 1990 log.go:172] (0xc0007586e0) (3) Data frame handling\nI0506 11:46:56.734408 1990 log.go:172] (0xc0007586e0) (3) Data frame sent\nI0506 11:46:56.734425 1990 log.go:172] (0xc000154630) Data frame received for 3\nI0506 11:46:56.734441 1990 log.go:172] (0xc0007586e0) (3) Data frame 
handling\nI0506 11:46:56.734469 1990 log.go:172] (0xc000652dc0) (5) Data frame handling\nI0506 11:46:56.736212 1990 log.go:172] (0xc000154630) Data frame received for 1\nI0506 11:46:56.736238 1990 log.go:172] (0xc000758640) (1) Data frame handling\nI0506 11:46:56.736251 1990 log.go:172] (0xc000758640) (1) Data frame sent\nI0506 11:46:56.736260 1990 log.go:172] (0xc000154630) (0xc000758640) Stream removed, broadcasting: 1\nI0506 11:46:56.736270 1990 log.go:172] (0xc000154630) Go away received\nI0506 11:46:56.736570 1990 log.go:172] (0xc000154630) (0xc000758640) Stream removed, broadcasting: 1\nI0506 11:46:56.736599 1990 log.go:172] (0xc000154630) (0xc0007586e0) Stream removed, broadcasting: 3\nI0506 11:46:56.736618 1990 log.go:172] (0xc000154630) (0xc000652dc0) Stream removed, broadcasting: 5\n" May 6 11:46:56.740: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 6 11:46:56.740: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 6 11:47:06.776: INFO: Waiting for StatefulSet e2e-tests-statefulset-hfcxp/ss2 to complete update May 6 11:47:06.776: INFO: Waiting for Pod e2e-tests-statefulset-hfcxp/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 6 11:47:06.776: INFO: Waiting for Pod e2e-tests-statefulset-hfcxp/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 6 11:47:06.776: INFO: Waiting for Pod e2e-tests-statefulset-hfcxp/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 6 11:47:16.784: INFO: Waiting for StatefulSet e2e-tests-statefulset-hfcxp/ss2 to complete update May 6 11:47:16.784: INFO: Waiting for Pod e2e-tests-statefulset-hfcxp/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 6 11:47:16.784: INFO: Waiting for Pod e2e-tests-statefulset-hfcxp/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 6 11:47:26.796: INFO: Waiting for StatefulSet e2e-tests-statefulset-hfcxp/ss2 to complete update May 6 11:47:26.796: INFO: Waiting for Pod e2e-tests-statefulset-hfcxp/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 6 11:47:36.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hfcxp ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 6 11:47:37.078: INFO: stderr: "I0506 11:47:36.923040 2012 log.go:172] (0xc0006fa370) (0xc000738640) Create stream\nI0506 11:47:36.923119 2012 log.go:172] (0xc0006fa370) (0xc000738640) Stream added, broadcasting: 1\nI0506 11:47:36.926119 2012 log.go:172] (0xc0006fa370) Reply frame received for 1\nI0506 11:47:36.926176 2012 log.go:172] (0xc0006fa370) (0xc000600be0) Create stream\nI0506 11:47:36.926203 2012 log.go:172] (0xc0006fa370) (0xc000600be0) Stream added, broadcasting: 3\nI0506 11:47:36.927355 2012 log.go:172] (0xc0006fa370) Reply frame received for 3\nI0506 11:47:36.927403 2012 log.go:172] (0xc0006fa370) (0xc00069a000) Create stream\nI0506 11:47:36.927421 2012 log.go:172] (0xc0006fa370) (0xc00069a000) Stream added, broadcasting: 5\nI0506 11:47:36.928580 2012 log.go:172] (0xc0006fa370) Reply frame received for 5\nI0506 11:47:37.071656 2012 log.go:172] (0xc0006fa370) Data frame received for 3\nI0506 11:47:37.071691 2012 log.go:172] (0xc000600be0) (3) Data frame handling\nI0506 11:47:37.071709 2012 log.go:172] (0xc000600be0) (3) Data frame sent\nI0506 11:47:37.071721 2012 log.go:172] 
(0xc0006fa370) Data frame received for 3\nI0506 11:47:37.071733 2012 log.go:172] (0xc000600be0) (3) Data frame handling\nI0506 11:47:37.071770 2012 log.go:172] (0xc0006fa370) Data frame received for 5\nI0506 11:47:37.071813 2012 log.go:172] (0xc00069a000) (5) Data frame handling\nI0506 11:47:37.074092 2012 log.go:172] (0xc0006fa370) Data frame received for 1\nI0506 11:47:37.074127 2012 log.go:172] (0xc000738640) (1) Data frame handling\nI0506 11:47:37.074149 2012 log.go:172] (0xc000738640) (1) Data frame sent\nI0506 11:47:37.074182 2012 log.go:172] (0xc0006fa370) (0xc000738640) Stream removed, broadcasting: 1\nI0506 11:47:37.074227 2012 log.go:172] (0xc0006fa370) Go away received\nI0506 11:47:37.074380 2012 log.go:172] (0xc0006fa370) (0xc000738640) Stream removed, broadcasting: 1\nI0506 11:47:37.074398 2012 log.go:172] (0xc0006fa370) (0xc000600be0) Stream removed, broadcasting: 3\nI0506 11:47:37.074411 2012 log.go:172] (0xc0006fa370) (0xc00069a000) Stream removed, broadcasting: 5\n" May 6 11:47:37.079: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 6 11:47:37.079: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 6 11:47:47.112: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 6 11:47:57.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-hfcxp ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 6 11:47:57.406: INFO: stderr: "I0506 11:47:57.292828 2036 log.go:172] (0xc00013a840) (0xc00065d540) Create stream\nI0506 11:47:57.292917 2036 log.go:172] (0xc00013a840) (0xc00065d540) Stream added, broadcasting: 1\nI0506 11:47:57.295414 2036 log.go:172] (0xc00013a840) Reply frame received for 1\nI0506 11:47:57.295479 2036 log.go:172] (0xc00013a840) (0xc000766000) Create stream\nI0506 11:47:57.295497 2036 log.go:172] (0xc00013a840) (0xc000766000) Stream added, broadcasting: 3\nI0506 11:47:57.296549 2036 log.go:172] (0xc00013a840) Reply frame received for 3\nI0506 11:47:57.296608 2036 log.go:172] (0xc00013a840) (0xc000620000) Create stream\nI0506 11:47:57.296630 2036 log.go:172] (0xc00013a840) (0xc000620000) Stream added, broadcasting: 5\nI0506 11:47:57.297766 2036 log.go:172] (0xc00013a840) Reply frame received for 5\nI0506 11:47:57.399797 2036 log.go:172] (0xc00013a840) Data frame received for 5\nI0506 11:47:57.399823 2036 log.go:172] (0xc000620000) (5) Data frame handling\nI0506 11:47:57.399844 2036 log.go:172] (0xc00013a840) Data frame received for 3\nI0506 11:47:57.399849 2036 log.go:172] (0xc000766000) (3) Data frame handling\nI0506 11:47:57.399854 2036 log.go:172] (0xc000766000) (3) Data frame sent\nI0506 11:47:57.399859 2036 log.go:172] (0xc00013a840) Data frame received for 3\nI0506 11:47:57.399862 2036 log.go:172] (0xc000766000) (3) Data frame handling\nI0506 11:47:57.401766 2036 log.go:172] (0xc00013a840) Data frame received for 1\nI0506 11:47:57.401784 2036 log.go:172] (0xc00065d540) (1) Data frame handling\nI0506 11:47:57.401809 2036 log.go:172] (0xc00065d540) (1) Data frame sent\nI0506 11:47:57.401827 2036 log.go:172] (0xc00013a840) (0xc00065d540) Stream removed, broadcasting: 1\nI0506 11:47:57.401991 2036 log.go:172] (0xc00013a840) (0xc00065d540) Stream removed, broadcasting: 1\nI0506 11:47:57.402023 2036 log.go:172] (0xc00013a840) Go away received\nI0506 11:47:57.402059 2036 log.go:172] (0xc00013a840) (0xc000766000) Stream removed, 
broadcasting: 3\nI0506 11:47:57.402090 2036 log.go:172] (0xc00013a840) (0xc000620000) Stream removed, broadcasting: 5\n" May 6 11:47:57.406: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 6 11:47:57.406: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 6 11:48:07.466: INFO: Waiting for StatefulSet e2e-tests-statefulset-hfcxp/ss2 to complete update May 6 11:48:07.466: INFO: Waiting for Pod e2e-tests-statefulset-hfcxp/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 6 11:48:07.466: INFO: Waiting for Pod e2e-tests-statefulset-hfcxp/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 6 11:48:07.466: INFO: Waiting for Pod e2e-tests-statefulset-hfcxp/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 6 11:48:17.472: INFO: Waiting for StatefulSet e2e-tests-statefulset-hfcxp/ss2 to complete update May 6 11:48:17.472: INFO: Waiting for Pod e2e-tests-statefulset-hfcxp/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 6 11:48:17.472: INFO: Waiting for Pod e2e-tests-statefulset-hfcxp/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 6 11:48:27.527: INFO: Waiting for StatefulSet e2e-tests-statefulset-hfcxp/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 6 11:48:37.473: INFO: Deleting all statefulset in ns e2e-tests-statefulset-hfcxp May 6 11:48:37.476: INFO: Scaling statefulset ss2 to 0 May 6 11:48:57.490: INFO: Waiting for statefulset status.replicas updated to 0 May 6 11:48:57.493: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:48:57.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-hfcxp" for this suite. 
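
The StatefulSet test above drives a rolling update by changing the pod template image (nginx:1.14-alpine to nginx:1.15-alpine in the log) and then rolls back by restoring the previous template; with the RollingUpdate strategy the controller replaces pods in reverse ordinal order, which is what the per-revision waits in the log are tracking. A sketch of such a StatefulSet follows — the labels and headless-service name are assumptions; only the images and replica count come from the log.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test              # the log shows "Creating service test" as the governing service
  replicas: 3
  selector:
    matchLabels:
      app: ss2-demo              # illustrative labels
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: ss2-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # switch to 1.15-alpine to trigger the update
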
May 6 11:49:05.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:49:05.560: INFO: namespace: e2e-tests-statefulset-hfcxp, resource: bindings, ignored listing per whitelist May 6 11:49:05.611: INFO: namespace e2e-tests-statefulset-hfcxp deletion completed in 8.104442912s • [SLOW TEST:169.687 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:49:05.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod May 6 11:49:05.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-b9p6z' May 6 11:49:08.186: INFO: stderr: "" May 6 11:49:08.186: INFO: stdout: "pod/pause created\n" May 6 11:49:08.186: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 6 11:49:08.186: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-b9p6z" to be "running and ready" May 6 11:49:08.195: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.825394ms May 6 11:49:10.198: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012143924s May 6 11:49:12.202: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.015808096s May 6 11:49:12.202: INFO: Pod "pause" satisfied condition "running and ready" May 6 11:49:12.202: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod May 6 11:49:12.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-b9p6z' May 6 11:49:12.316: INFO: stderr: "" May 6 11:49:12.316: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 6 11:49:12.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-b9p6z' May 6 11:49:12.422: INFO: stderr: "" May 6 11:49:12.422: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 6 11:49:12.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-b9p6z' May 6 11:49:12.523: INFO: stderr: "" May 6 11:49:12.524: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 6 11:49:12.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-b9p6z' May 6 11:49:12.625: INFO: stderr: "" May 6 11:49:12.625: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources May 6 11:49:12.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-b9p6z' May 6 11:49:12.750: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 11:49:12.750: INFO: stdout: "pod \"pause\" force deleted\n" May 6 11:49:12.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-b9p6z' May 6 11:49:12.868: INFO: stderr: "No resources found.\n" May 6 11:49:12.868: INFO: stdout: "" May 6 11:49:12.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-b9p6z -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 11:49:12.975: INFO: stderr: "" May 6 11:49:12.975: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:49:12.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-b9p6z" for this suite. 
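The label round-trip above maps directly onto plain kubectl; the sketch below uses a generic pod named pause in the current namespace and mirrors the commands logged above.

kubectl label pod pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                      # shows an extra TESTING-LABEL column with the value
kubectl label pod pause testing-label-                      # a trailing '-' removes the label
kubectl get pod pause -L testing-label                      # the column is now empty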
May 6 11:49:19.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:49:19.262: INFO: namespace: e2e-tests-kubectl-b9p6z, resource: bindings, ignored listing per whitelist May 6 11:49:19.264: INFO: namespace e2e-tests-kubectl-b9p6z deletion completed in 6.285110675s • [SLOW TEST:13.653 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:49:19.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components May 6 11:49:19.430: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 6 11:49:19.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8hgnz' May 6 11:49:19.751: INFO: stderr: "" May 6 11:49:19.751: INFO: stdout: "service/redis-slave created\n" May 6 11:49:19.752: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 6 11:49:19.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8hgnz' May 6 11:49:20.042: INFO: stderr: "" May 6 11:49:20.042: INFO: stdout: "service/redis-master created\n" May 6 11:49:20.043: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 6 11:49:20.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8hgnz' May 6 11:49:20.347: INFO: stderr: "" May 6 11:49:20.347: INFO: stdout: "service/frontend created\n" May 6 11:49:20.347: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 6 11:49:20.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8hgnz' May 6 11:49:20.586: INFO: stderr: "" May 6 11:49:20.586: INFO: stdout: "deployment.extensions/frontend created\n" May 6 11:49:20.586: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 6 11:49:20.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8hgnz' May 6 11:49:20.867: INFO: stderr: "" May 6 11:49:20.867: INFO: stdout: "deployment.extensions/redis-master created\n" May 6 11:49:20.868: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 6 11:49:20.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8hgnz' May 6 11:49:21.157: INFO: stderr: "" May 6 11:49:21.157: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app May 6 11:49:21.157: INFO: Waiting for all frontend pods to be Running. May 6 11:49:31.208: INFO: Waiting for frontend to serve content. May 6 11:49:31.225: INFO: Trying to add a new entry to the guestbook. May 6 11:49:31.241: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 6 11:49:31.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8hgnz' May 6 11:49:31.397: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 6 11:49:31.397: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 6 11:49:31.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8hgnz' May 6 11:49:31.569: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 11:49:31.569: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 6 11:49:31.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8hgnz' May 6 11:49:31.699: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 11:49:31.699: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 6 11:49:31.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8hgnz' May 6 11:49:31.805: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 11:49:31.805: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources May 6 11:49:31.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8hgnz' May 6 11:49:31.945: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 11:49:31.945: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 6 11:49:31.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-8hgnz' May 6 11:49:32.404: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 11:49:32.404: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:49:32.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8hgnz" for this suite. 
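The guestbook manifests above are fed to 'kubectl create -f -' and use the extensions/v1beta1 Deployment API. As a hedged sketch, the frontend piece rewritten against apps/v1 (where spec.selector is mandatory) would look like this; labels, image and resource requests follow the manifest logged above, everything else is illustrative.

cat <<'EOF' | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
EOF
kubectl delete deployment frontend --grace-period=0 --force   # cleanup, mirroring the forced deletes above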
May 6 11:50:12.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:50:12.670: INFO: namespace: e2e-tests-kubectl-8hgnz, resource: bindings, ignored listing per whitelist May 6 11:50:12.720: INFO: namespace e2e-tests-kubectl-8hgnz deletion completed in 40.226187512s • [SLOW TEST:53.456 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:50:12.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 6 11:50:12.858: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-v5gb7,SelfLink:/api/v1/namespaces/e2e-tests-watch-v5gb7/configmaps/e2e-watch-test-label-changed,UID:be88e061-8f8f-11ea-99e8-0242ac110002,ResourceVersion:9038988,Generation:0,CreationTimestamp:2020-05-06 11:50:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 6 11:50:12.859: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-v5gb7,SelfLink:/api/v1/namespaces/e2e-tests-watch-v5gb7/configmaps/e2e-watch-test-label-changed,UID:be88e061-8f8f-11ea-99e8-0242ac110002,ResourceVersion:9038989,Generation:0,CreationTimestamp:2020-05-06 11:50:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 6 11:50:12.859: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-v5gb7,SelfLink:/api/v1/namespaces/e2e-tests-watch-v5gb7/configmaps/e2e-watch-test-label-changed,UID:be88e061-8f8f-11ea-99e8-0242ac110002,ResourceVersion:9038990,Generation:0,CreationTimestamp:2020-05-06 11:50:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 6 11:50:22.927: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-v5gb7,SelfLink:/api/v1/namespaces/e2e-tests-watch-v5gb7/configmaps/e2e-watch-test-label-changed,UID:be88e061-8f8f-11ea-99e8-0242ac110002,ResourceVersion:9039011,Generation:0,CreationTimestamp:2020-05-06 11:50:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 6 11:50:22.927: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-v5gb7,SelfLink:/api/v1/namespaces/e2e-tests-watch-v5gb7/configmaps/e2e-watch-test-label-changed,UID:be88e061-8f8f-11ea-99e8-0242ac110002,ResourceVersion:9039012,Generation:0,CreationTimestamp:2020-05-06 11:50:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 6 11:50:22.927: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-v5gb7,SelfLink:/api/v1/namespaces/e2e-tests-watch-v5gb7/configmaps/e2e-watch-test-label-changed,UID:be88e061-8f8f-11ea-99e8-0242ac110002,ResourceVersion:9039013,Generation:0,CreationTimestamp:2020-05-06 11:50:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:50:22.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-v5gb7" for this suite. 
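The watch semantics verified above (a delete event when the label stops matching the selector, an add event when it is restored) can be reproduced by hand; the configmap name and label follow the test, the rest is an illustrative sketch.

kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch &                               # selector-scoped watch in the background
kubectl create configmap e2e-watch-test-label-changed
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored              # object enters the watch (add event)
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=not-watched --overwrite                 # object leaves the watch (delete event on the stream)
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored --overwrite  # object re-enters the watch (add event)
kubectl delete configmap e2e-watch-test-label-changed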
May 6 11:50:28.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:50:28.991: INFO: namespace: e2e-tests-watch-v5gb7, resource: bindings, ignored listing per whitelist May 6 11:50:29.051: INFO: namespace e2e-tests-watch-v5gb7 deletion completed in 6.096849208s • [SLOW TEST:16.330 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:50:29.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 6 11:50:37.280: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 11:50:37.303: INFO: Pod pod-with-poststart-http-hook still exists May 6 11:50:39.303: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 11:50:39.307: INFO: Pod pod-with-poststart-http-hook still exists May 6 11:50:41.303: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 11:50:41.307: INFO: Pod pod-with-poststart-http-hook still exists May 6 11:50:43.303: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 11:50:43.306: INFO: Pod pod-with-poststart-http-hook still exists May 6 11:50:45.303: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 11:50:45.308: INFO: Pod pod-with-poststart-http-hook still exists May 6 11:50:47.303: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 11:50:47.307: INFO: Pod pod-with-poststart-http-hook still exists May 6 11:50:49.303: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 11:50:49.307: INFO: Pod pod-with-poststart-http-hook still exists May 6 11:50:51.303: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 11:50:51.312: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:50:51.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-6m9hg" for this suite. 
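A minimal pod spec with a postStart httpGet hook, as exercised above, is sketched below; the image and the target host/port/path stand in for the hook-handler pod the test creates in BeforeEach and are assumptions.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: docker.io/library/nginx:1.14-alpine   # illustrative image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # placeholder path on the handler
          port: 8080                  # placeholder port of the handler pod
          host: 10.244.1.10           # placeholder: pod IP of the HTTPGet handler
EOF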
May 6 11:51:13.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:51:13.357: INFO: namespace: e2e-tests-container-lifecycle-hook-6m9hg, resource: bindings, ignored listing per whitelist May 6 11:51:13.402: INFO: namespace e2e-tests-container-lifecycle-hook-6m9hg deletion completed in 22.087066275s • [SLOW TEST:44.351 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:51:13.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions May 6 11:51:13.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 6 11:51:13.712: INFO: stderr: "" May 6 11:51:13.712: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:51:13.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jzh6s" for this suite. 
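The check above boils down to asking the server which group/versions it serves and looking for the core v1 entry:

kubectl api-versions                 # one group/version per line, as in the stdout above
kubectl api-versions | grep -x v1    # exits 0 only if the core "v1" group/version is served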
May 6 11:51:19.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:51:19.858: INFO: namespace: e2e-tests-kubectl-jzh6s, resource: bindings, ignored listing per whitelist May 6 11:51:19.874: INFO: namespace e2e-tests-kubectl-jzh6s deletion completed in 6.159002079s • [SLOW TEST:6.472 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:51:19.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 6 11:51:20.022: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 11:51:20.030: INFO: Waiting for terminating namespaces to be deleted... May 6 11:51:20.032: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 6 11:51:20.041: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 6 11:51:20.041: INFO: Container kube-proxy ready: true, restart count 0 May 6 11:51:20.041: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 6 11:51:20.041: INFO: Container kindnet-cni ready: true, restart count 0 May 6 11:51:20.041: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 6 11:51:20.041: INFO: Container coredns ready: true, restart count 0 May 6 11:51:20.041: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 6 11:51:20.045: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 6 11:51:20.045: INFO: Container kindnet-cni ready: true, restart count 0 May 6 11:51:20.045: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 6 11:51:20.045: INFO: Container coredns ready: true, restart count 0 May 6 11:51:20.045: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 6 11:51:20.045: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 May 6 11:51:20.162: INFO: Pod coredns-54ff9cd656-4h7lb requesting 
resource cpu=100m on Node hunter-worker May 6 11:51:20.162: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 May 6 11:51:20.162: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker May 6 11:51:20.162: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 May 6 11:51:20.162: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 May 6 11:51:20.162: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-e6b0328d-8f8f-11ea-b5fe-0242ac110017.160c6e26828c53ed], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-2sbnw/filler-pod-e6b0328d-8f8f-11ea-b5fe-0242ac110017 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6b0328d-8f8f-11ea-b5fe-0242ac110017.160c6e26cff4edb2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6b0328d-8f8f-11ea-b5fe-0242ac110017.160c6e2733975764], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6b0328d-8f8f-11ea-b5fe-0242ac110017.160c6e274950aec2], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6b623ab-8f8f-11ea-b5fe-0242ac110017.160c6e268380bb17], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-2sbnw/filler-pod-e6b623ab-8f8f-11ea-b5fe-0242ac110017 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6b623ab-8f8f-11ea-b5fe-0242ac110017.160c6e2706c722f1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6b623ab-8f8f-11ea-b5fe-0242ac110017.160c6e2750d6f569], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-e6b623ab-8f8f-11ea-b5fe-0242ac110017.160c6e276200e123], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.160c6e2776540d78], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:51:25.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-2sbnw" for this suite. 
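The FailedScheduling event above comes from a pod whose CPU request no node can satisfy once the filler pods are running. A hedged stand-alone reproduction (the request value is an assumption chosen to exceed any node's remaining allocatable CPU):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "3"        # assumption: larger than the CPU left on any schedulable node
EOF
kubectl describe pod additional-pod   # Events should show FailedScheduling: ... Insufficient cpu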
May 6 11:51:31.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:51:31.524: INFO: namespace: e2e-tests-sched-pred-2sbnw, resource: bindings, ignored listing per whitelist May 6 11:51:31.538: INFO: namespace e2e-tests-sched-pred-2sbnw deletion completed in 6.068337047s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:11.663 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:51:31.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 11:51:31.835: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 8.175712ms)
May 6 11:51:31.885: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 49.478241ms)
May 6 11:51:31.889: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.05923ms)
May 6 11:51:31.892: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.866075ms)
May 6 11:51:31.894: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.487528ms)
May 6 11:51:31.897: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.501104ms)
May 6 11:51:31.899: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.364314ms)
May 6 11:51:31.902: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.490345ms)
May 6 11:51:31.904: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.483997ms)
May 6 11:51:31.911: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 6.289295ms)
May 6 11:51:31.917: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 6.610274ms)
May 6 11:51:31.921: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.583905ms)
May 6 11:51:31.925: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.062818ms)
May 6 11:51:31.994: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 68.49892ms)
May 6 11:51:31.997: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.734756ms)
May 6 11:51:32.001: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.645756ms)
May 6 11:51:32.004: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.004013ms)
May 6 11:51:32.008: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.202134ms)
May 6 11:51:32.011: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.892778ms)
May 6 11:51:32.015: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/
(200; 3.862701ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:51:32.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-9xr5s" for this suite. May 6 11:51:38.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:51:38.058: INFO: namespace: e2e-tests-proxy-9xr5s, resource: bindings, ignored listing per whitelist May 6 11:51:38.104: INFO: namespace e2e-tests-proxy-9xr5s deletion completed in 6.085962432s • [SLOW TEST:6.566 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:51:38.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller May 6 11:51:38.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-g882p' May 6 11:51:38.503: INFO: stderr: "" May 6 11:51:38.503: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 11:51:38.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-g882p' May 6 11:51:38.630: INFO: stderr: "" May 6 11:51:38.630: INFO: stdout: "update-demo-nautilus-8nn9n update-demo-nautilus-jj646 " May 6 11:51:38.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nn9n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g882p' May 6 11:51:38.740: INFO: stderr: "" May 6 11:51:38.740: INFO: stdout: "" May 6 11:51:38.741: INFO: update-demo-nautilus-8nn9n is created but not running May 6 11:51:43.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-g882p' May 6 11:51:43.845: INFO: stderr: "" May 6 11:51:43.845: INFO: stdout: "update-demo-nautilus-8nn9n update-demo-nautilus-jj646 " May 6 11:51:43.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nn9n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g882p' May 6 11:51:43.932: INFO: stderr: "" May 6 11:51:43.932: INFO: stdout: "true" May 6 11:51:43.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8nn9n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g882p' May 6 11:51:44.022: INFO: stderr: "" May 6 11:51:44.022: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 11:51:44.022: INFO: validating pod update-demo-nautilus-8nn9n May 6 11:51:44.026: INFO: got data: { "image": "nautilus.jpg" } May 6 11:51:44.026: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 11:51:44.026: INFO: update-demo-nautilus-8nn9n is verified up and running May 6 11:51:44.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jj646 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g882p' May 6 11:51:44.134: INFO: stderr: "" May 6 11:51:44.135: INFO: stdout: "true" May 6 11:51:44.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jj646 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g882p' May 6 11:51:44.231: INFO: stderr: "" May 6 11:51:44.231: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 11:51:44.231: INFO: validating pod update-demo-nautilus-jj646 May 6 11:51:44.235: INFO: got data: { "image": "nautilus.jpg" } May 6 11:51:44.235: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 6 11:51:44.235: INFO: update-demo-nautilus-jj646 is verified up and running STEP: rolling-update to new replication controller May 6 11:51:44.238: INFO: scanned /root for discovery docs: May 6 11:51:44.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-g882p' May 6 11:52:06.829: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 6 11:52:06.829: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 11:52:06.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-g882p' May 6 11:52:06.931: INFO: stderr: "" May 6 11:52:06.931: INFO: stdout: "update-demo-kitten-ht94r update-demo-kitten-rjrnk " May 6 11:52:06.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ht94r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g882p' May 6 11:52:07.025: INFO: stderr: "" May 6 11:52:07.025: INFO: stdout: "true" May 6 11:52:07.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ht94r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g882p' May 6 11:52:07.124: INFO: stderr: "" May 6 11:52:07.124: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 6 11:52:07.124: INFO: validating pod update-demo-kitten-ht94r May 6 11:52:07.132: INFO: got data: { "image": "kitten.jpg" } May 6 11:52:07.132: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 6 11:52:07.132: INFO: update-demo-kitten-ht94r is verified up and running May 6 11:52:07.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rjrnk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g882p' May 6 11:52:07.234: INFO: stderr: "" May 6 11:52:07.234: INFO: stdout: "true" May 6 11:52:07.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rjrnk -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-g882p' May 6 11:52:07.347: INFO: stderr: "" May 6 11:52:07.347: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 6 11:52:07.347: INFO: validating pod update-demo-kitten-rjrnk May 6 11:52:07.377: INFO: got data: { "image": "kitten.jpg" } May 6 11:52:07.377: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 6 11:52:07.377: INFO: update-demo-kitten-rjrnk is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:52:07.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-g882p" for this suite. May 6 11:52:29.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:52:29.428: INFO: namespace: e2e-tests-kubectl-g882p, resource: bindings, ignored listing per whitelist May 6 11:52:29.476: INFO: namespace e2e-tests-kubectl-g882p deletion completed in 22.095969621s • [SLOW TEST:51.372 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:52:29.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0506 11:52:30.671034 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 6 11:52:30.671: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:52:30.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-d59gq" for this suite. May 6 11:52:36.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:52:36.714: INFO: namespace: e2e-tests-gc-d59gq, resource: bindings, ignored listing per whitelist May 6 11:52:36.767: INFO: namespace e2e-tests-gc-d59gq deletion completed in 6.092580465s • [SLOW TEST:7.291 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:52:36.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 11:52:36.875: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14675f84-8f90-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-h7jc7" to be "success or failure" May 6 11:52:36.879: INFO: Pod "downwardapi-volume-14675f84-8f90-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.714353ms May 6 11:52:38.960: INFO: Pod "downwardapi-volume-14675f84-8f90-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.085189118s May 6 11:52:40.964: INFO: Pod "downwardapi-volume-14675f84-8f90-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.089656923s STEP: Saw pod success May 6 11:52:40.965: INFO: Pod "downwardapi-volume-14675f84-8f90-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:52:40.968: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-14675f84-8f90-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 11:52:40.986: INFO: Waiting for pod downwardapi-volume-14675f84-8f90-11ea-b5fe-0242ac110017 to disappear May 6 11:52:40.991: INFO: Pod downwardapi-volume-14675f84-8f90-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:52:40.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-h7jc7" for this suite. May 6 11:52:47.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:52:47.043: INFO: namespace: e2e-tests-projected-h7jc7, resource: bindings, ignored listing per whitelist May 6 11:52:47.084: INFO: namespace e2e-tests-projected-h7jc7 deletion completed in 6.090621088s • [SLOW TEST:10.317 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:52:47.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 6 11:52:47.222: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:52:54.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-b9qgq" for this suite. 
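The init-container test above only logs "PodSpec: initContainers in spec.initContainers". In the same spirit, a minimal RestartNever pod with two init containers that must both complete before the app container runs is sketched below; the image and commands are assumptions.

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Never
  initContainers:           # run to completion, in order, before the app container starts
  - name: init1
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ['sh', '-c', 'exit 0']
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ['sh', '-c', 'exit 0']
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ['sh', '-c', 'echo init done']
EOF
kubectl get pod pod-init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'   # expect: Completed Completed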
May 6 11:53:00.505: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:53:00.572: INFO: namespace: e2e-tests-init-container-b9qgq, resource: bindings, ignored listing per whitelist May 6 11:53:00.588: INFO: namespace e2e-tests-init-container-b9qgq deletion completed in 6.100764799s • [SLOW TEST:13.503 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:53:00.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 6 11:53:00.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-rvfms' May 6 11:53:00.803: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 6 11:53:00.803: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 May 6 11:53:04.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-rvfms' May 6 11:53:04.954: INFO: stderr: "" May 6 11:53:04.954: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:53:04.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rvfms" for this suite. 
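The stderr above flags --generator=deployment/v1beta1 as deprecated and points at run-pod/v1 or kubectl create; the non-deprecated equivalents for the same image would be:

kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine    # Deployment, without the deprecated generator
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine  # single-pod form suggested by the warning
kubectl delete deployment e2e-test-nginx-deployment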
May 6 11:53:17.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:53:17.039: INFO: namespace: e2e-tests-kubectl-rvfms, resource: bindings, ignored listing per whitelist May 6 11:53:17.094: INFO: namespace e2e-tests-kubectl-rvfms deletion completed in 12.136947407s • [SLOW TEST:16.506 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:53:17.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-2c74f2f2-8f90-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 11:53:17.240: INFO: Waiting up to 5m0s for pod "pod-configmaps-2c76b688-8f90-11ea-b5fe-0242ac110017" in namespace "e2e-tests-configmap-zq6dk" to be "success or failure" May 6 11:53:17.255: INFO: Pod "pod-configmaps-2c76b688-8f90-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 15.756619ms May 6 11:53:19.259: INFO: Pod "pod-configmaps-2c76b688-8f90-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019716095s May 6 11:53:21.264: INFO: Pod "pod-configmaps-2c76b688-8f90-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023952659s STEP: Saw pod success May 6 11:53:21.264: INFO: Pod "pod-configmaps-2c76b688-8f90-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:53:21.267: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-2c76b688-8f90-11ea-b5fe-0242ac110017 container configmap-volume-test: STEP: delete the pod May 6 11:53:21.282: INFO: Waiting for pod pod-configmaps-2c76b688-8f90-11ea-b5fe-0242ac110017 to disappear May 6 11:53:21.286: INFO: Pod pod-configmaps-2c76b688-8f90-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:53:21.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-zq6dk" for this suite. 
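The ConfigMap test above mounts a single ConfigMap into the same pod through two separate volumes and checks that the container can read both mounts before the pod completes. A minimal Go sketch of that shape of pod spec; the names, image, command, and mount paths are assumptions for illustration, and only the "two volumes backed by one ConfigMap" structure mirrors the test.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cmName := "configmap-test-volume" // ConfigMap name assumed
	// Both volumes point at the same ConfigMap, just under different volume names.
	cmSource := func() corev1.VolumeSource {
		return corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"}, // pod name assumed
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "configmap-volume-1", VolumeSource: cmSource()},
				{Name: "configmap-volume-2", VolumeSource: cmSource()},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox", // image assumed; the e2e suite uses its own test image
				Command: []string{"sh", "-c", "ls /etc/configmap-volume-1 /etc/configmap-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1"},
					{Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2"},
				},
			}},
		},
	}
	fmt.Println(len(pod.Spec.Volumes), "volumes reference ConfigMap", cmName)
}
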
May 6 11:53:27.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:53:27.374: INFO: namespace: e2e-tests-configmap-zq6dk, resource: bindings, ignored listing per whitelist May 6 11:53:27.398: INFO: namespace e2e-tests-configmap-zq6dk deletion completed in 6.108569117s • [SLOW TEST:10.303 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:53:27.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 11:53:27.509: INFO: Creating deployment "nginx-deployment" May 6 11:53:27.539: INFO: Waiting for observed generation 1 May 6 11:53:29.953: INFO: Waiting for all required pods to come up May 6 11:53:30.181: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 6 11:53:40.608: INFO: Waiting for deployment "nginx-deployment" to complete May 6 11:53:40.614: INFO: Updating deployment "nginx-deployment" with a non-existent image May 6 11:53:40.619: INFO: Updating deployment nginx-deployment May 6 11:53:40.620: INFO: Waiting for observed generation 2 May 6 11:53:42.809: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 6 11:53:42.849: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 6 11:53:42.852: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 6 11:53:42.860: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 6 11:53:42.860: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 6 11:53:42.863: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 6 11:53:42.867: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 6 11:53:42.867: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 6 11:53:42.873: INFO: Updating deployment nginx-deployment May 6 11:53:42.873: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 6 11:53:43.275: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 6 11:53:43.540: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 6 
11:53:45.553: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6r4mw/deployments/nginx-deployment,UID:32980768-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040054,Generation:3,CreationTimestamp:2020-05-06 11:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-05-06 11:53:43 +0000 UTC 2020-05-06 11:53:43 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-06 11:53:44 +0000 UTC 2020-05-06 11:53:27 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} May 6 11:53:45.556: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6r4mw/replicasets/nginx-deployment-5c98f8fb5,UID:3a6892a3-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040051,Generation:3,CreationTimestamp:2020-05-06 11:53:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 32980768-8f90-11ea-99e8-0242ac110002 0xc001f7a057 0xc001f7a058}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 6 11:53:45.556: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 6 11:53:45.556: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6r4mw/replicasets/nginx-deployment-85ddf47c5d,UID:329d63cf-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040033,Generation:3,CreationTimestamp:2020-05-06 11:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 32980768-8f90-11ea-99e8-0242ac110002 0xc001f7a117 0xc001f7a118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 6 11:53:45.563: INFO: Pod "nginx-deployment-5c98f8fb5-57bd8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-57bd8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-5c98f8fb5-57bd8,UID:3c41fe2e-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040032,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a6892a3-8f90-11ea-99e8-0242ac110002 0xc001f7aad7 0xc001f7aad8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f7ab50} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001f7ab70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.563: INFO: Pod "nginx-deployment-5c98f8fb5-5mqjb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5mqjb,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-5c98f8fb5-5mqjb,UID:3c268b9a-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040012,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a6892a3-8f90-11ea-99e8-0242ac110002 0xc001f7abe7 0xc001f7abe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f7ac60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f7ac80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.563: INFO: Pod "nginx-deployment-5c98f8fb5-5vqm9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5vqm9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-5c98f8fb5-5vqm9,UID:3c30278b-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040022,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a6892a3-8f90-11ea-99e8-0242ac110002 0xc001f7ad07 0xc001f7ad08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f7ad80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f7ada0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.563: INFO: Pod "nginx-deployment-5c98f8fb5-7b6l4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7b6l4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-5c98f8fb5-7b6l4,UID:3c267e26-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040063,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a6892a3-8f90-11ea-99e8-0242ac110002 0xc001f7ae17 0xc001f7ae18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f7ae90} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f7aeb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-06 11:53:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.564: INFO: Pod "nginx-deployment-5c98f8fb5-7gc2t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7gc2t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-5c98f8fb5-7gc2t,UID:3c304bed-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040091,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a6892a3-8f90-11ea-99e8-0242ac110002 0xc001f7aff0 0xc001f7aff1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f7b070} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f7b090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-06 11:53:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.564: INFO: Pod "nginx-deployment-5c98f8fb5-7qwsd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7qwsd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-5c98f8fb5-7qwsd,UID:3a6afe31-8f90-11ea-99e8-0242ac110002,ResourceVersion:9039940,Generation:0,CreationTimestamp:2020-05-06 11:53:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a6892a3-8f90-11ea-99e8-0242ac110002 0xc001f7b150 0xc001f7b151}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f7b1e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f7b230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-06 11:53:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.564: INFO: Pod "nginx-deployment-5c98f8fb5-8h5r4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8h5r4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-5c98f8fb5-8h5r4,UID:3c3054ed-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040027,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a6892a3-8f90-11ea-99e8-0242ac110002 0xc001f7b2f0 0xc001f7b2f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f7b370} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f7b390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.564: INFO: Pod "nginx-deployment-5c98f8fb5-9w5pr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9w5pr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-5c98f8fb5-9w5pr,UID:3c305985-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040021,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a6892a3-8f90-11ea-99e8-0242ac110002 0xc001f7b447 0xc001f7b448}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f7b4c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f7b4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.564: INFO: Pod "nginx-deployment-5c98f8fb5-fb9mf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fb9mf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-5c98f8fb5-fb9mf,UID:3a8918b0-8f90-11ea-99e8-0242ac110002,ResourceVersion:9039964,Generation:0,CreationTimestamp:2020-05-06 11:53:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a6892a3-8f90-11ea-99e8-0242ac110002 0xc001f7b557 0xc001f7b558}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f7b5d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f7b5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-06 11:53:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.565: INFO: Pod "nginx-deployment-5c98f8fb5-gjhmf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gjhmf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-5c98f8fb5-gjhmf,UID:3bfde2b0-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040056,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a6892a3-8f90-11ea-99e8-0242ac110002 0xc001f7b730 0xc001f7b731}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f7b7b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f7b7e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-06 11:53:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.565: INFO: Pod "nginx-deployment-5c98f8fb5-hlf9r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hlf9r,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-5c98f8fb5-hlf9r,UID:3a92da93-8f90-11ea-99e8-0242ac110002,ResourceVersion:9039965,Generation:0,CreationTimestamp:2020-05-06 11:53:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a6892a3-8f90-11ea-99e8-0242ac110002 0xc001f7b8b0 0xc001f7b8b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f7b930} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f7b950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-06 11:53:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.565: INFO: Pod "nginx-deployment-5c98f8fb5-qj7z2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qj7z2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-5c98f8fb5-qj7z2,UID:3a706a30-8f90-11ea-99e8-0242ac110002,ResourceVersion:9039960,Generation:0,CreationTimestamp:2020-05-06 11:53:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a6892a3-8f90-11ea-99e8-0242ac110002 0xc001f7ba80 0xc001f7ba81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] 
map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f7bb00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f7bb20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-06 11:53:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.565: INFO: Pod "nginx-deployment-5c98f8fb5-xpb78" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xpb78,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-5c98f8fb5-xpb78,UID:3a70880d-8f90-11ea-99e8-0242ac110002,ResourceVersion:9039950,Generation:0,CreationTimestamp:2020-05-06 11:53:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 3a6892a3-8f90-11ea-99e8-0242ac110002 0xc001f7bbe0 0xc001f7bbe1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f7bea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f7bec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-06 11:53:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.566: INFO: Pod "nginx-deployment-85ddf47c5d-42gfs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-42gfs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-42gfs,UID:3c311b0e-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040028,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc001602010 0xc001602011}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001602080} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016020a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.566: INFO: Pod "nginx-deployment-85ddf47c5d-5gvtq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5gvtq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-5gvtq,UID:3c26a7af-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040011,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc001602117 0xc001602118}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001602190} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016021b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.566: INFO: Pod "nginx-deployment-85ddf47c5d-5rx4h" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5rx4h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-5rx4h,UID:32abb4f5-8f90-11ea-99e8-0242ac110002,ResourceVersion:9039878,Generation:0,CreationTimestamp:2020-05-06 11:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc001602227 0xc001602228}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016022b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0016022d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:36 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:36 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.167,StartTime:2020-05-06 11:53:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 11:53:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ddc96d57b7bdee51bc909109881161049ca50382b935c5e8f5e5244b23fe8665}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.566: INFO: Pod "nginx-deployment-85ddf47c5d-6rqpn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6rqpn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-6rqpn,UID:32b3c150-8f90-11ea-99e8-0242ac110002,ResourceVersion:9039901,Generation:0,CreationTimestamp:2020-05-06 11:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc001602397 0xc001602398}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001602410} {node.kubernetes.io/unreachable Exists NoExecute 0xc001602430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:38 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.192,StartTime:2020-05-06 11:53:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 11:53:37 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b361b1d4c721e2a34bd15a62fdafec29f682cc140e21618a299d1d78dc45a1b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.566: INFO: Pod "nginx-deployment-85ddf47c5d-782rn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-782rn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-782rn,UID:32aba15b-8f90-11ea-99e8-0242ac110002,ResourceVersion:9039895,Generation:0,CreationTimestamp:2020-05-06 
11:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc001602747 0xc001602748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0016028b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001602920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.191,StartTime:2020-05-06 11:53:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 11:53:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://76694ebb541e6fb125d5ac0855c623784749e22f7105bc607b07f7d7ac522861}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.566: INFO: Pod "nginx-deployment-85ddf47c5d-8flgd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8flgd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-8flgd,UID:3bf3d42e-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040018,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc0016029e7 
0xc0016029e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001602aa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001602ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-06 11:53:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.567: INFO: Pod "nginx-deployment-85ddf47c5d-cbtsj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cbtsj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-cbtsj,UID:3c267a7d-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040044,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc001602b77 0xc001602b78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001602c20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001602c40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-06 11:53:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.567: INFO: Pod "nginx-deployment-85ddf47c5d-ch6w2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ch6w2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-ch6w2,UID:3bfde4bf-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040046,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc001602d67 0xc001602d68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001602e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001602e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-06 11:53:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.567: INFO: Pod "nginx-deployment-85ddf47c5d-cnsrz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cnsrz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-cnsrz,UID:32b3c235-8f90-11ea-99e8-0242ac110002,ResourceVersion:9039890,Generation:0,CreationTimestamp:2020-05-06 11:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc001602f27 0xc001602f28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001603010} {node.kubernetes.io/unreachable Exists NoExecute 0xc001603030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.169,StartTime:2020-05-06 11:53:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 11:53:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ae083a8cd8a7847d33b0a25ef5d8d6e06f50bf940f877631987ff3f363735c83}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.567: INFO: Pod "nginx-deployment-85ddf47c5d-cz7wc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cz7wc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-cz7wc,UID:3c30ff75-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040030,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc001603287 0xc001603288}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001603360} {node.kubernetes.io/unreachable Exists NoExecute 0xc001603380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.567: INFO: Pod "nginx-deployment-85ddf47c5d-dfxpg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dfxpg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-dfxpg,UID:3bfdd204-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040040,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc0016034c7 0xc0016034c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001603560} {node.kubernetes.io/unreachable Exists NoExecute 0xc001603580}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-06 11:53:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.568: INFO: Pod "nginx-deployment-85ddf47c5d-dz5qf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dz5qf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-dz5qf,UID:3c268cd7-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040087,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc001603757 0xc001603758}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001603800} {node.kubernetes.io/unreachable Exists NoExecute 0xc001603830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-06 11:53:43 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.568: INFO: Pod "nginx-deployment-85ddf47c5d-gvf9x" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gvf9x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-gvf9x,UID:32a81e85-8f90-11ea-99e8-0242ac110002,ResourceVersion:9039856,Generation:0,CreationTimestamp:2020-05-06 11:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc001603957 0xc001603958}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001603c00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001603c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:32 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.165,StartTime:2020-05-06 11:53:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 11:53:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b41d2129a97fe8f3e705cf12c52eb495461f9a2c08e2ee0bb25fc97e8769fd96}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.568: INFO: Pod "nginx-deployment-85ddf47c5d-kpg52" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kpg52,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-kpg52,UID:32ab9ad0-8f90-11ea-99e8-0242ac110002,ResourceVersion:9039894,Generation:0,CreationTimestamp:2020-05-06 11:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc002036057 0xc002036058}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020360e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002036100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.168,StartTime:2020-05-06 11:53:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 11:53:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b8d3b85b08b1c46ca52197c846d8c1054339e29d6a7179c319a0c0ba8051817b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.568: INFO: Pod "nginx-deployment-85ddf47c5d-lwbd2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lwbd2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-lwbd2,UID:3c30fc77-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040029,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc0020361c7 0xc0020361c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020364c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020364e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.568: INFO: Pod "nginx-deployment-85ddf47c5d-nsfl8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nsfl8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-nsfl8,UID:3c30f02b-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040025,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc0020365b7 0xc0020365b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002036640} {node.kubernetes.io/unreachable Exists NoExecute 0xc002036660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.568: INFO: Pod "nginx-deployment-85ddf47c5d-qnr7m" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qnr7m,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-qnr7m,UID:32a9433b-8f90-11ea-99e8-0242ac110002,ResourceVersion:9039891,Generation:0,CreationTimestamp:2020-05-06 11:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc0020366d7 0xc0020366d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0020367e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002036830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.190,StartTime:2020-05-06 11:53:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 11:53:36 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://65d32faf3651b89f3fed3e141dc7fa3c1cbc81d59a347a441fa085da750e3ced}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.569: INFO: Pod "nginx-deployment-85ddf47c5d-rmfhx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rmfhx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-rmfhx,UID:3c263a57-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040053,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc0020369d7 0xc0020369d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002036a60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002036b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-06 11:53:43 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.569: INFO: Pod "nginx-deployment-85ddf47c5d-vs7vh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vs7vh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-vs7vh,UID:3c311866-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040026,Generation:0,CreationTimestamp:2020-05-06 11:53:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc002036c07 0xc002036c08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002036cd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002036cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:43 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 6 11:53:45.569: INFO: Pod "nginx-deployment-85ddf47c5d-w8vs4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-w8vs4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-6r4mw,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6r4mw/pods/nginx-deployment-85ddf47c5d-w8vs4,UID:32a9540c-8f90-11ea-99e8-0242ac110002,ResourceVersion:9039866,Generation:0,CreationTimestamp:2020-05-06 11:53:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 329d63cf-8f90-11ea-99e8-0242ac110002 0xc002036d67 0xc002036d68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mgsmn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mgsmn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-mgsmn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002036eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002036ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:34 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:53:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.166,StartTime:2020-05-06 11:53:27 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-06 11:53:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://41f731fdcd7b3c65fb555b4bee942d2a0170aad9576383a3d649e889d14c15ac}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:53:45.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-6r4mw" for this suite. May 6 11:54:17.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:54:17.791: INFO: namespace: e2e-tests-deployment-6r4mw, resource: bindings, ignored listing per whitelist May 6 11:54:17.798: INFO: namespace e2e-tests-deployment-6r4mw deletion completed in 32.225820575s • [SLOW TEST:50.400 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:54:17.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-50a123f9-8f90-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 11:54:17.959: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-50a8e593-8f90-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-lvppb" to be "success or failure" May 6 11:54:17.965: INFO: Pod "pod-projected-secrets-50a8e593-8f90-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.88776ms May 6 11:54:19.969: INFO: Pod "pod-projected-secrets-50a8e593-8f90-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010054756s May 6 11:54:21.974: INFO: Pod "pod-projected-secrets-50a8e593-8f90-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014359009s STEP: Saw pod success May 6 11:54:21.974: INFO: Pod "pod-projected-secrets-50a8e593-8f90-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:54:21.977: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-50a8e593-8f90-11ea-b5fe-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 6 11:54:22.109: INFO: Waiting for pod pod-projected-secrets-50a8e593-8f90-11ea-b5fe-0242ac110017 to disappear May 6 11:54:22.112: INFO: Pod pod-projected-secrets-50a8e593-8f90-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:54:22.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lvppb" for this suite. May 6 11:54:28.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:54:28.164: INFO: namespace: e2e-tests-projected-lvppb, resource: bindings, ignored listing per whitelist May 6 11:54:28.218: INFO: namespace e2e-tests-projected-lvppb deletion completed in 6.103559676s • [SLOW TEST:10.421 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:54:28.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:54:28.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-8rz2l" for this suite. 
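For reference, the projected-Secret mapping exercised in the spec above can be expressed with the k8s.io/api types roughly as in the sketch below. This is a minimal illustration, not the test's own source: the secret name, key name, mapped path, image and command are placeholders chosen to mirror what the log shows the pod doing (mount a Secret through a projected volume and read a remapped key).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod that mounts a Secret through a projected volume and remaps one key
	// to a different file name, similar in shape to the pod the test creates.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
								// Remap the key "data-1" to the file "new-path-data-1" (names are placeholders).
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "docker.io/library/busybox:1.29", // assumed image; the log does not show it
				Command: []string{"cat", "/etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}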
May 6 11:54:34.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:54:34.415: INFO: namespace: e2e-tests-services-8rz2l, resource: bindings, ignored listing per whitelist May 6 11:54:34.464: INFO: namespace e2e-tests-services-8rz2l deletion completed in 6.085300891s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.245 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:54:34.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 11:54:35.054: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5a9e3488-8f90-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-kbjz5" to be "success or failure" May 6 11:54:35.110: INFO: Pod "downwardapi-volume-5a9e3488-8f90-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 56.180234ms May 6 11:54:37.114: INFO: Pod "downwardapi-volume-5a9e3488-8f90-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060416362s May 6 11:54:39.118: INFO: Pod "downwardapi-volume-5a9e3488-8f90-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064756226s May 6 11:54:41.122: INFO: Pod "downwardapi-volume-5a9e3488-8f90-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068351803s STEP: Saw pod success May 6 11:54:41.122: INFO: Pod "downwardapi-volume-5a9e3488-8f90-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:54:41.125: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5a9e3488-8f90-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 11:54:41.165: INFO: Waiting for pod downwardapi-volume-5a9e3488-8f90-11ea-b5fe-0242ac110017 to disappear May 6 11:54:41.211: INFO: Pod downwardapi-volume-5a9e3488-8f90-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:54:41.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kbjz5" for this suite. 
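The downward-API volume used by the CPU-request spec above boils down to a pod shaped roughly like the following sketch, built with the k8s.io/api types. Only the client-container name and the requests.cpu resource field come from the log; the image, command, mount path and the 250m request are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod whose projected downward-API volume publishes the container's CPU
	// request as a file; the container just reads the file and exits.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29", // assumed image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("250m"), // assumed request; this is what the file reports
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}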
May 6 11:54:47.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:54:47.347: INFO: namespace: e2e-tests-projected-kbjz5, resource: bindings, ignored listing per whitelist May 6 11:54:47.428: INFO: namespace e2e-tests-projected-kbjz5 deletion completed in 6.212656887s • [SLOW TEST:12.964 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:54:47.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-gjs7v STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gjs7v to expose endpoints map[] May 6 11:54:47.751: INFO: Get endpoints failed (2.137696ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 6 11:54:48.754: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gjs7v exposes endpoints map[] (1.004931163s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-gjs7v STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gjs7v to expose endpoints map[pod1:[100]] May 6 11:54:51.907: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gjs7v exposes endpoints map[pod1:[100]] (3.148529643s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-gjs7v STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gjs7v to expose endpoints map[pod1:[100] pod2:[101]] May 6 11:54:56.173: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gjs7v exposes endpoints map[pod1:[100] pod2:[101]] (4.263488082s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-gjs7v STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gjs7v to expose endpoints map[pod2:[101]] May 6 11:54:57.265: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gjs7v exposes endpoints map[pod2:[101]] (1.088572692s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-gjs7v STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-gjs7v to expose endpoints map[] May 6 11:54:58.348: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-gjs7v exposes endpoints map[] (1.080106003s elapsed) 
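The multi-endpoint spec above drives a Service with two ports whose backend pods each expose one container port (100 and 101 in the logged endpoints map). A minimal sketch of such a Service follows; the selector labels and front-end port numbers are placeholders, only the target ports come from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Service with two ports; each backend pod only has to serve the port(s)
	// it declares, and the endpoints controller fills in the per-pod addresses,
	// which is exactly what the map[pod1:[100] pod2:[101]] assertions check.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multi-endpoint-test"}, // assumed label
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100), Protocol: corev1.ProtocolTCP},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101), Protocol: corev1.ProtocolTCP},
			},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}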
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:54:58.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-gjs7v" for this suite. May 6 11:55:20.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:55:20.845: INFO: namespace: e2e-tests-services-gjs7v, resource: bindings, ignored listing per whitelist May 6 11:55:20.871: INFO: namespace e2e-tests-services-gjs7v deletion completed in 22.276041567s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:33.443 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:55:20.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-tvkcs STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 11:55:21.140: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 6 11:55:47.307: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.211:8080/dial?request=hostName&protocol=udp&host=10.244.2.184&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-tvkcs PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 11:55:47.307: INFO: >>> kubeConfig: /root/.kube/config I0506 11:55:47.336685 7 log.go:172] (0xc001cae2c0) (0xc000ac55e0) Create stream I0506 11:55:47.336714 7 log.go:172] (0xc001cae2c0) (0xc000ac55e0) Stream added, broadcasting: 1 I0506 11:55:47.338697 7 log.go:172] (0xc001cae2c0) Reply frame received for 1 I0506 11:55:47.338811 7 log.go:172] (0xc001cae2c0) (0xc0024fc780) Create stream I0506 11:55:47.338837 7 log.go:172] (0xc001cae2c0) (0xc0024fc780) Stream added, broadcasting: 3 I0506 11:55:47.339749 7 log.go:172] (0xc001cae2c0) Reply frame received for 3 I0506 11:55:47.339801 7 log.go:172] (0xc001cae2c0) (0xc000ac5680) Create stream I0506 11:55:47.339816 7 log.go:172] (0xc001cae2c0) (0xc000ac5680) Stream added, broadcasting: 5 I0506 11:55:47.340598 7 log.go:172] (0xc001cae2c0) Reply frame received for 5 I0506 11:55:47.503599 7 log.go:172] (0xc001cae2c0) Data frame received for 3 I0506 11:55:47.503626 7 log.go:172] (0xc0024fc780) (3) Data frame handling I0506 11:55:47.503649 7 log.go:172] (0xc0024fc780) (3) Data frame sent I0506 
11:55:47.504182 7 log.go:172] (0xc001cae2c0) Data frame received for 3 I0506 11:55:47.504217 7 log.go:172] (0xc0024fc780) (3) Data frame handling I0506 11:55:47.504250 7 log.go:172] (0xc001cae2c0) Data frame received for 5 I0506 11:55:47.504264 7 log.go:172] (0xc000ac5680) (5) Data frame handling I0506 11:55:47.505762 7 log.go:172] (0xc001cae2c0) Data frame received for 1 I0506 11:55:47.505780 7 log.go:172] (0xc000ac55e0) (1) Data frame handling I0506 11:55:47.505792 7 log.go:172] (0xc000ac55e0) (1) Data frame sent I0506 11:55:47.505803 7 log.go:172] (0xc001cae2c0) (0xc000ac55e0) Stream removed, broadcasting: 1 I0506 11:55:47.505818 7 log.go:172] (0xc001cae2c0) Go away received I0506 11:55:47.505931 7 log.go:172] (0xc001cae2c0) (0xc000ac55e0) Stream removed, broadcasting: 1 I0506 11:55:47.505951 7 log.go:172] (0xc001cae2c0) (0xc0024fc780) Stream removed, broadcasting: 3 I0506 11:55:47.505961 7 log.go:172] (0xc001cae2c0) (0xc000ac5680) Stream removed, broadcasting: 5 May 6 11:55:47.505: INFO: Waiting for endpoints: map[] May 6 11:55:47.508: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.211:8080/dial?request=hostName&protocol=udp&host=10.244.1.210&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-tvkcs PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 11:55:47.508: INFO: >>> kubeConfig: /root/.kube/config I0506 11:55:47.541272 7 log.go:172] (0xc001cae790) (0xc000ac5cc0) Create stream I0506 11:55:47.541300 7 log.go:172] (0xc001cae790) (0xc000ac5cc0) Stream added, broadcasting: 1 I0506 11:55:47.542648 7 log.go:172] (0xc001cae790) Reply frame received for 1 I0506 11:55:47.542670 7 log.go:172] (0xc001cae790) (0xc000ac5d60) Create stream I0506 11:55:47.542676 7 log.go:172] (0xc001cae790) (0xc000ac5d60) Stream added, broadcasting: 3 I0506 11:55:47.543730 7 log.go:172] (0xc001cae790) Reply frame received for 3 I0506 11:55:47.543757 7 log.go:172] (0xc001cae790) (0xc001c55540) Create stream I0506 11:55:47.543777 7 log.go:172] (0xc001cae790) (0xc001c55540) Stream added, broadcasting: 5 I0506 11:55:47.544671 7 log.go:172] (0xc001cae790) Reply frame received for 5 I0506 11:55:47.599598 7 log.go:172] (0xc001cae790) Data frame received for 3 I0506 11:55:47.599625 7 log.go:172] (0xc000ac5d60) (3) Data frame handling I0506 11:55:47.599634 7 log.go:172] (0xc000ac5d60) (3) Data frame sent I0506 11:55:47.600800 7 log.go:172] (0xc001cae790) Data frame received for 5 I0506 11:55:47.600842 7 log.go:172] (0xc001c55540) (5) Data frame handling I0506 11:55:47.600877 7 log.go:172] (0xc001cae790) Data frame received for 3 I0506 11:55:47.600902 7 log.go:172] (0xc000ac5d60) (3) Data frame handling I0506 11:55:47.602715 7 log.go:172] (0xc001cae790) Data frame received for 1 I0506 11:55:47.602744 7 log.go:172] (0xc000ac5cc0) (1) Data frame handling I0506 11:55:47.602761 7 log.go:172] (0xc000ac5cc0) (1) Data frame sent I0506 11:55:47.602778 7 log.go:172] (0xc001cae790) (0xc000ac5cc0) Stream removed, broadcasting: 1 I0506 11:55:47.602849 7 log.go:172] (0xc001cae790) (0xc000ac5cc0) Stream removed, broadcasting: 1 I0506 11:55:47.602864 7 log.go:172] (0xc001cae790) (0xc000ac5d60) Stream removed, broadcasting: 3 I0506 11:55:47.602878 7 log.go:172] (0xc001cae790) (0xc001c55540) Stream removed, broadcasting: 5 May 6 11:55:47.602: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 
11:55:47.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0506 11:55:47.603202 7 log.go:172] (0xc001cae790) Go away received STEP: Destroying namespace "e2e-tests-pod-network-test-tvkcs" for this suite. May 6 11:56:09.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:56:09.702: INFO: namespace: e2e-tests-pod-network-test-tvkcs, resource: bindings, ignored listing per whitelist May 6 11:56:09.768: INFO: namespace e2e-tests-pod-network-test-tvkcs deletion completed in 22.131148899s • [SLOW TEST:48.897 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:56:09.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 11:56:09.934: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9365b1c3-8f90-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-j2fdh" to be "success or failure" May 6 11:56:09.937: INFO: Pod "downwardapi-volume-9365b1c3-8f90-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.450098ms May 6 11:56:11.972: INFO: Pod "downwardapi-volume-9365b1c3-8f90-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038257567s May 6 11:56:13.979: INFO: Pod "downwardapi-volume-9365b1c3-8f90-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044784011s May 6 11:56:15.982: INFO: Pod "downwardapi-volume-9365b1c3-8f90-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.048741246s STEP: Saw pod success May 6 11:56:15.983: INFO: Pod "downwardapi-volume-9365b1c3-8f90-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:56:15.986: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-9365b1c3-8f90-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 11:56:16.005: INFO: Waiting for pod downwardapi-volume-9365b1c3-8f90-11ea-b5fe-0242ac110017 to disappear May 6 11:56:16.009: INFO: Pod downwardapi-volume-9365b1c3-8f90-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:56:16.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-j2fdh" for this suite. May 6 11:56:22.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:56:22.170: INFO: namespace: e2e-tests-projected-j2fdh, resource: bindings, ignored listing per whitelist May 6 11:56:22.188: INFO: namespace e2e-tests-projected-j2fdh deletion completed in 6.175480342s • [SLOW TEST:12.419 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:56:22.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 6 11:56:22.469: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-96wpv,SelfLink:/api/v1/namespaces/e2e-tests-watch-96wpv/configmaps/e2e-watch-test-resource-version,UID:9ac96734-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040903,Generation:0,CreationTimestamp:2020-05-06 11:56:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 6 11:56:22.469: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-96wpv,SelfLink:/api/v1/namespaces/e2e-tests-watch-96wpv/configmaps/e2e-watch-test-resource-version,UID:9ac96734-8f90-11ea-99e8-0242ac110002,ResourceVersion:9040904,Generation:0,CreationTimestamp:2020-05-06 11:56:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:56:22.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-96wpv" for this suite. May 6 11:56:28.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:56:28.639: INFO: namespace: e2e-tests-watch-96wpv, resource: bindings, ignored listing per whitelist May 6 11:56:28.655: INFO: namespace e2e-tests-watch-96wpv deletion completed in 6.146964464s • [SLOW TEST:6.467 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:56:28.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode May 6 11:56:28.773: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-hn62v" to be "success or failure" May 6 11:56:28.776: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396043ms May 6 11:56:30.780: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006649543s May 6 11:56:32.784: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010865451s May 6 11:56:34.788: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015276598s STEP: Saw pod success May 6 11:56:34.788: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 6 11:56:34.791: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 6 11:56:34.813: INFO: Waiting for pod pod-host-path-test to disappear May 6 11:56:34.818: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:56:34.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-hn62v" for this suite. May 6 11:56:40.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:56:40.909: INFO: namespace: e2e-tests-hostpath-hn62v, resource: bindings, ignored listing per whitelist May 6 11:56:40.959: INFO: namespace e2e-tests-hostpath-hn62v deletion completed in 6.138111469s • [SLOW TEST:12.304 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:56:40.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-a5f29e92-8f90-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 11:56:41.090: INFO: Waiting up to 5m0s for pod "pod-configmaps-a5f715d5-8f90-11ea-b5fe-0242ac110017" in namespace "e2e-tests-configmap-sgst8" to be "success or failure" May 6 11:56:41.095: INFO: Pod "pod-configmaps-a5f715d5-8f90-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.467491ms May 6 11:56:43.104: INFO: Pod "pod-configmaps-a5f715d5-8f90-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013982234s May 6 11:56:45.116: INFO: Pod "pod-configmaps-a5f715d5-8f90-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026009011s STEP: Saw pod success May 6 11:56:45.116: INFO: Pod "pod-configmaps-a5f715d5-8f90-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 11:56:45.119: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-a5f715d5-8f90-11ea-b5fe-0242ac110017 container configmap-volume-test: STEP: delete the pod May 6 11:56:45.143: INFO: Waiting for pod pod-configmaps-a5f715d5-8f90-11ea-b5fe-0242ac110017 to disappear May 6 11:56:45.160: INFO: Pod pod-configmaps-a5f715d5-8f90-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:56:45.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-sgst8" for this suite. May 6 11:56:51.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:56:51.528: INFO: namespace: e2e-tests-configmap-sgst8, resource: bindings, ignored listing per whitelist May 6 11:56:51.574: INFO: namespace e2e-tests-configmap-sgst8 deletion completed in 6.410977444s • [SLOW TEST:10.615 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:56:51.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server May 6 11:56:51.667: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:56:51.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-l4fxn" for this suite. 
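The ConfigMap spec above ("mappings and Item mode set") mounts a ConfigMap with one key remapped to a new path and an explicit per-item file mode. A minimal sketch of that shape is below; the ConfigMap name, key, path, mode and image are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Pod consuming a ConfigMap as a volume, remapping one key to a new file
	// name and setting an explicit per-item file mode (0400).
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2",
							Mode: int32Ptr(0400), // per-item mode overrides the volume's defaultMode
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29", // assumed image
				Command: []string{"sh", "-c", "ls -l /etc/configmap-volume/path/to/data-2 && cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}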
May 6 11:56:57.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:56:57.882: INFO: namespace: e2e-tests-kubectl-l4fxn, resource: bindings, ignored listing per whitelist May 6 11:56:57.887: INFO: namespace e2e-tests-kubectl-l4fxn deletion completed in 6.112648441s • [SLOW TEST:6.313 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:56:57.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-6smh STEP: Creating a pod to test atomic-volume-subpath May 6 11:56:58.133: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-6smh" in namespace "e2e-tests-subpath-z4mzr" to be "success or failure" May 6 11:56:58.142: INFO: Pod "pod-subpath-test-downwardapi-6smh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.576818ms May 6 11:57:00.146: INFO: Pod "pod-subpath-test-downwardapi-6smh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012900056s May 6 11:57:02.255: INFO: Pod "pod-subpath-test-downwardapi-6smh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121345502s May 6 11:57:04.272: INFO: Pod "pod-subpath-test-downwardapi-6smh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139216979s May 6 11:57:06.275: INFO: Pod "pod-subpath-test-downwardapi-6smh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142032065s May 6 11:57:08.279: INFO: Pod "pod-subpath-test-downwardapi-6smh": Phase="Running", Reason="", readiness=false. Elapsed: 10.145531069s May 6 11:57:10.282: INFO: Pod "pod-subpath-test-downwardapi-6smh": Phase="Running", Reason="", readiness=false. Elapsed: 12.149227685s May 6 11:57:12.286: INFO: Pod "pod-subpath-test-downwardapi-6smh": Phase="Running", Reason="", readiness=false. Elapsed: 14.153299039s May 6 11:57:14.291: INFO: Pod "pod-subpath-test-downwardapi-6smh": Phase="Running", Reason="", readiness=false. Elapsed: 16.157429892s May 6 11:57:16.295: INFO: Pod "pod-subpath-test-downwardapi-6smh": Phase="Running", Reason="", readiness=false. Elapsed: 18.161889358s May 6 11:57:18.300: INFO: Pod "pod-subpath-test-downwardapi-6smh": Phase="Running", Reason="", readiness=false. Elapsed: 20.166375318s May 6 11:57:20.304: INFO: Pod "pod-subpath-test-downwardapi-6smh": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.170592314s May 6 11:57:22.308: INFO: Pod "pod-subpath-test-downwardapi-6smh": Phase="Running", Reason="", readiness=false. Elapsed: 24.175241781s May 6 11:57:24.312: INFO: Pod "pod-subpath-test-downwardapi-6smh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.179220018s STEP: Saw pod success May 6 11:57:24.312: INFO: Pod "pod-subpath-test-downwardapi-6smh" satisfied condition "success or failure" May 6 11:57:24.315: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-6smh container test-container-subpath-downwardapi-6smh: STEP: delete the pod May 6 11:57:24.344: INFO: Waiting for pod pod-subpath-test-downwardapi-6smh to disappear May 6 11:57:24.402: INFO: Pod pod-subpath-test-downwardapi-6smh no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-6smh May 6 11:57:24.402: INFO: Deleting pod "pod-subpath-test-downwardapi-6smh" in namespace "e2e-tests-subpath-z4mzr" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:57:24.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-z4mzr" for this suite. May 6 11:57:30.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:57:30.873: INFO: namespace: e2e-tests-subpath-z4mzr, resource: bindings, ignored listing per whitelist May 6 11:57:30.882: INFO: namespace e2e-tests-subpath-z4mzr deletion completed in 6.150465794s • [SLOW TEST:32.995 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:57:30.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin May 6 11:57:31.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-t5qwr run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 6 11:57:34.486: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0506 11:57:34.411710 2918 log.go:172] (0xc0001486e0) (0xc0008446e0) Create stream\nI0506 11:57:34.411767 2918 log.go:172] (0xc0001486e0) (0xc0008446e0) Stream added, broadcasting: 1\nI0506 11:57:34.413881 2918 log.go:172] (0xc0001486e0) Reply frame received for 1\nI0506 11:57:34.413935 2918 log.go:172] (0xc0001486e0) (0xc00087a000) Create stream\nI0506 11:57:34.413959 2918 log.go:172] (0xc0001486e0) (0xc00087a000) Stream added, broadcasting: 3\nI0506 11:57:34.414901 2918 log.go:172] (0xc0001486e0) Reply frame received for 3\nI0506 11:57:34.414965 2918 log.go:172] (0xc0001486e0) (0xc000844780) Create stream\nI0506 11:57:34.414987 2918 log.go:172] (0xc0001486e0) (0xc000844780) Stream added, broadcasting: 5\nI0506 11:57:34.415860 2918 log.go:172] (0xc0001486e0) Reply frame received for 5\nI0506 11:57:34.415908 2918 log.go:172] (0xc0001486e0) (0xc000844820) Create stream\nI0506 11:57:34.415928 2918 log.go:172] (0xc0001486e0) (0xc000844820) Stream added, broadcasting: 7\nI0506 11:57:34.416910 2918 log.go:172] (0xc0001486e0) Reply frame received for 7\nI0506 11:57:34.417362 2918 log.go:172] (0xc00087a000) (3) Writing data frame\nI0506 11:57:34.417542 2918 log.go:172] (0xc00087a000) (3) Writing data frame\nI0506 11:57:34.418542 2918 log.go:172] (0xc0001486e0) Data frame received for 5\nI0506 11:57:34.418561 2918 log.go:172] (0xc000844780) (5) Data frame handling\nI0506 11:57:34.418585 2918 log.go:172] (0xc000844780) (5) Data frame sent\nI0506 11:57:34.419086 2918 log.go:172] (0xc0001486e0) Data frame received for 5\nI0506 11:57:34.419101 2918 log.go:172] (0xc000844780) (5) Data frame handling\nI0506 11:57:34.419114 2918 log.go:172] (0xc000844780) (5) Data frame sent\nI0506 11:57:34.462074 2918 log.go:172] (0xc0001486e0) Data frame received for 7\nI0506 11:57:34.462107 2918 log.go:172] (0xc000844820) (7) Data frame handling\nI0506 11:57:34.462144 2918 log.go:172] (0xc0001486e0) Data frame received for 5\nI0506 11:57:34.462178 2918 log.go:172] (0xc000844780) (5) Data frame handling\nI0506 11:57:34.462576 2918 log.go:172] (0xc0001486e0) (0xc00087a000) Stream removed, broadcasting: 3\nI0506 11:57:34.462737 2918 log.go:172] (0xc0001486e0) Data frame received for 1\nI0506 11:57:34.462757 2918 log.go:172] (0xc0008446e0) (1) Data frame handling\nI0506 11:57:34.462775 2918 log.go:172] (0xc0008446e0) (1) Data frame sent\nI0506 11:57:34.462788 2918 log.go:172] (0xc0001486e0) (0xc0008446e0) Stream removed, broadcasting: 1\nI0506 11:57:34.462876 2918 log.go:172] (0xc0001486e0) (0xc0008446e0) Stream removed, broadcasting: 1\nI0506 11:57:34.462904 2918 log.go:172] (0xc0001486e0) (0xc00087a000) Stream removed, broadcasting: 3\nI0506 11:57:34.462911 2918 log.go:172] (0xc0001486e0) (0xc000844780) Stream removed, broadcasting: 5\nI0506 11:57:34.463015 2918 log.go:172] (0xc0001486e0) Go away received\nI0506 11:57:34.463099 2918 log.go:172] (0xc0001486e0) (0xc000844820) Stream removed, broadcasting: 7\n" May 6 11:57:34.486: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:57:36.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-t5qwr" for this suite. 
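The kubectl run --rm --generator=job/v1 invocation above is, under the hood, roughly equivalent to creating a Job like the sketch below, attaching to its pod, and deleting the Job once it finishes (the deletion is what --rm automates). The image and command are taken from the logged command line; everything else is assumed.

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Job whose single pod reads stdin, echoes a marker and exits; with
	// RestartPolicy OnFailure the pod is retried rather than recreated forever.
	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:      "e2e-test-rm-busybox-job",
						Image:     "docker.io/library/busybox:1.29",
						Command:   []string{"sh", "-c", "cat && echo 'stdin closed'"},
						Stdin:     true, // the test attaches and writes to stdin
						StdinOnce: true,
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}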
May 6 11:57:44.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:57:44.627: INFO: namespace: e2e-tests-kubectl-t5qwr, resource: bindings, ignored listing per whitelist May 6 11:57:44.633: INFO: namespace e2e-tests-kubectl-t5qwr deletion completed in 8.136315683s • [SLOW TEST:13.751 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:57:44.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 11:57:44.911: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 6 11:57:45.114: INFO: Pod name sample-pod: Found 0 pods out of 1 May 6 11:57:50.118: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 6 11:57:50.118: INFO: Creating deployment "test-rolling-update-deployment" May 6 11:57:50.123: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 6 11:57:50.168: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 6 11:57:52.175: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 6 11:57:52.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724363070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724363070, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724363070, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724363070, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 11:57:54.181: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) 
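The rolling-update spec above first creates a bare ReplicaSet and then a Deployment whose selector matches the same pods, so the ReplicaSet is adopted as an old revision and rolled over to the new template. A minimal sketch of such a Deployment is below; the sample-pod label, redis image and replica count come from the log, the rest is assumed. Note that the "25%!,(MISSING)" strings in the Deployment dump that follows appear to be the literal 25% maxUnavailable/maxSurge values mangled by fmt.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")

	// Deployment whose selector matches pods of an existing ReplicaSet, so that
	// ReplicaSet is adopted as an "old" revision and then scaled down while the
	// new template's ReplicaSet is scaled up, one pod at a time.
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}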
[AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 6 11:57:54.192: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-sgzj4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sgzj4/deployments/test-rolling-update-deployment,UID:cf1f11af-8f90-11ea-99e8-0242ac110002,ResourceVersion:9041268,Generation:1,CreationTimestamp:2020-05-06 11:57:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-06 11:57:50 +0000 UTC 2020-05-06 11:57:50 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-06 11:57:53 +0000 UTC 2020-05-06 11:57:50 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 6 11:57:54.195: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-sgzj4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sgzj4/replicasets/test-rolling-update-deployment-75db98fb4c,UID:cf27404f-8f90-11ea-99e8-0242ac110002,ResourceVersion:9041259,Generation:1,CreationTimestamp:2020-05-06 11:57:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cf1f11af-8f90-11ea-99e8-0242ac110002 0xc001ed3677 0xc001ed3678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 6 11:57:54.195: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 6 11:57:54.196: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-sgzj4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-sgzj4/replicasets/test-rolling-update-controller,UID:cc04539f-8f90-11ea-99e8-0242ac110002,ResourceVersion:9041267,Generation:2,CreationTimestamp:2020-05-06 11:57:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 
3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cf1f11af-8f90-11ea-99e8-0242ac110002 0xc001ed313f 0xc001ed3150}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 6 11:57:54.199: INFO: Pod "test-rolling-update-deployment-75db98fb4c-vj2jb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-vj2jb,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-sgzj4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-sgzj4/pods/test-rolling-update-deployment-75db98fb4c-vj2jb,UID:cf293b01-8f90-11ea-99e8-0242ac110002,ResourceVersion:9041258,Generation:0,CreationTimestamp:2020-05-06 11:57:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c cf27404f-8f90-11ea-99e8-0242ac110002 0xc002689607 0xc002689608}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-f7n28 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f7n28,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-f7n28 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002689680} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026896a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:57:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:57:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:57:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 11:57:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.186,StartTime:2020-05-06 11:57:50 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-06 11:57:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://553f458eddf88633635716c5d597051c324f062ba9132f956c275e64a742e114}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:57:54.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-sgzj4" for this suite. 
May 6 11:58:00.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:58:00.407: INFO: namespace: e2e-tests-deployment-sgzj4, resource: bindings, ignored listing per whitelist May 6 11:58:00.419: INFO: namespace e2e-tests-deployment-sgzj4 deletion completed in 6.216870181s • [SLOW TEST:15.786 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:58:00.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 6 11:58:00.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:00.845: INFO: stderr: "" May 6 11:58:00.845: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 11:58:00.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:00.957: INFO: stderr: "" May 6 11:58:00.957: INFO: stdout: "update-demo-nautilus-9wh6w update-demo-nautilus-lsnpb " May 6 11:58:00.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wh6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:01.069: INFO: stderr: "" May 6 11:58:01.069: INFO: stdout: "" May 6 11:58:01.069: INFO: update-demo-nautilus-9wh6w is created but not running May 6 11:58:06.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:06.218: INFO: stderr: "" May 6 11:58:06.218: INFO: stdout: "update-demo-nautilus-9wh6w update-demo-nautilus-lsnpb " May 6 11:58:06.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wh6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:06.333: INFO: stderr: "" May 6 11:58:06.333: INFO: stdout: "true" May 6 11:58:06.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wh6w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:06.436: INFO: stderr: "" May 6 11:58:06.436: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 11:58:06.436: INFO: validating pod update-demo-nautilus-9wh6w May 6 11:58:06.440: INFO: got data: { "image": "nautilus.jpg" } May 6 11:58:06.440: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 11:58:06.440: INFO: update-demo-nautilus-9wh6w is verified up and running May 6 11:58:06.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lsnpb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:06.546: INFO: stderr: "" May 6 11:58:06.546: INFO: stdout: "true" May 6 11:58:06.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lsnpb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:06.642: INFO: stderr: "" May 6 11:58:06.642: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 11:58:06.642: INFO: validating pod update-demo-nautilus-lsnpb May 6 11:58:06.646: INFO: got data: { "image": "nautilus.jpg" } May 6 11:58:06.647: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 11:58:06.647: INFO: update-demo-nautilus-lsnpb is verified up and running STEP: scaling down the replication controller May 6 11:58:06.649: INFO: scanned /root for discovery docs: May 6 11:58:06.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:07.803: INFO: stderr: "" May 6 11:58:07.803: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 6 11:58:07.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:07.899: INFO: stderr: "" May 6 11:58:07.899: INFO: stdout: "update-demo-nautilus-9wh6w update-demo-nautilus-lsnpb " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 11:58:12.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:13.021: INFO: stderr: "" May 6 11:58:13.021: INFO: stdout: "update-demo-nautilus-9wh6w update-demo-nautilus-lsnpb " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 11:58:18.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:18.117: INFO: stderr: "" May 6 11:58:18.117: INFO: stdout: "update-demo-nautilus-9wh6w update-demo-nautilus-lsnpb " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 11:58:23.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:23.226: INFO: stderr: "" May 6 11:58:23.226: INFO: stdout: "update-demo-nautilus-9wh6w " May 6 11:58:23.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wh6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:23.319: INFO: stderr: "" May 6 11:58:23.319: INFO: stdout: "true" May 6 11:58:23.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wh6w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:23.409: INFO: stderr: "" May 6 11:58:23.409: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 11:58:23.409: INFO: validating pod update-demo-nautilus-9wh6w May 6 11:58:23.411: INFO: got data: { "image": "nautilus.jpg" } May 6 11:58:23.411: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 11:58:23.411: INFO: update-demo-nautilus-9wh6w is verified up and running STEP: scaling up the replication controller May 6 11:58:23.412: INFO: scanned /root for discovery docs: May 6 11:58:23.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:24.600: INFO: stderr: "" May 6 11:58:24.600: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 6 11:58:24.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:24.705: INFO: stderr: "" May 6 11:58:24.705: INFO: stdout: "update-demo-nautilus-9wh6w update-demo-nautilus-h6q5f " May 6 11:58:24.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wh6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:24.793: INFO: stderr: "" May 6 11:58:24.793: INFO: stdout: "true" May 6 11:58:24.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wh6w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:24.891: INFO: stderr: "" May 6 11:58:24.891: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 11:58:24.891: INFO: validating pod update-demo-nautilus-9wh6w May 6 11:58:24.894: INFO: got data: { "image": "nautilus.jpg" } May 6 11:58:24.894: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 11:58:24.894: INFO: update-demo-nautilus-9wh6w is verified up and running May 6 11:58:24.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h6q5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:24.990: INFO: stderr: "" May 6 11:58:24.990: INFO: stdout: "" May 6 11:58:24.990: INFO: update-demo-nautilus-h6q5f is created but not running May 6 11:58:29.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:30.108: INFO: stderr: "" May 6 11:58:30.108: INFO: stdout: "update-demo-nautilus-9wh6w update-demo-nautilus-h6q5f " May 6 11:58:30.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wh6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:30.210: INFO: stderr: "" May 6 11:58:30.210: INFO: stdout: "true" May 6 11:58:30.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9wh6w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:30.299: INFO: stderr: "" May 6 11:58:30.299: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 11:58:30.299: INFO: validating pod update-demo-nautilus-9wh6w May 6 11:58:30.302: INFO: got data: { "image": "nautilus.jpg" } May 6 11:58:30.302: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 6 11:58:30.302: INFO: update-demo-nautilus-9wh6w is verified up and running May 6 11:58:30.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h6q5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:30.396: INFO: stderr: "" May 6 11:58:30.396: INFO: stdout: "true" May 6 11:58:30.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h6q5f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:30.484: INFO: stderr: "" May 6 11:58:30.484: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 11:58:30.484: INFO: validating pod update-demo-nautilus-h6q5f May 6 11:58:30.488: INFO: got data: { "image": "nautilus.jpg" } May 6 11:58:30.488: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 11:58:30.488: INFO: update-demo-nautilus-h6q5f is verified up and running STEP: using delete to clean up resources May 6 11:58:30.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:30.589: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 11:58:30.589: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 6 11:58:30.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-tpjf9' May 6 11:58:30.688: INFO: stderr: "No resources found.\n" May 6 11:58:30.688: INFO: stdout: "" May 6 11:58:30.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-tpjf9 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 11:58:30.793: INFO: stderr: "" May 6 11:58:30.793: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 11:58:30.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tpjf9" for this suite. 
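Condensed, the scale cycle driven above amounts to the following kubectl calls (same controller name, namespace and kubeconfig as in the log; the per-pod go-template checks for a running update-demo container are left out for brevity):
# Scale the replication controller down to one replica and wait for the pod list to shrink.
kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-tpjf9
kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-tpjf9 -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
# Scale back up to two replicas; a fresh pod (update-demo-nautilus-h6q5f above) is created.
kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-tpjf9
# Clean up as the test does, with a force delete, then confirm nothing is left behind.
kubectl --kubeconfig=/root/.kube/config delete rc update-demo-nautilus --grace-period=0 --force --namespace=e2e-tests-kubectl-tpjf9
kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-tpjf9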
May 6 11:58:52.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 11:58:52.899: INFO: namespace: e2e-tests-kubectl-tpjf9, resource: bindings, ignored listing per whitelist May 6 11:58:52.932: INFO: namespace e2e-tests-kubectl-tpjf9 deletion completed in 22.136155152s • [SLOW TEST:52.513 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 11:58:52.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 6 11:58:53.675: INFO: Pod name wrapped-volume-race-f4f2a02c-8f90-11ea-b5fe-0242ac110017: Found 0 pods out of 5 May 6 11:58:59.595: INFO: Pod name wrapped-volume-race-f4f2a02c-8f90-11ea-b5fe-0242ac110017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f4f2a02c-8f90-11ea-b5fe-0242ac110017 in namespace e2e-tests-emptydir-wrapper-djk7j, will wait for the garbage collector to delete the pods May 6 12:00:41.734: INFO: Deleting ReplicationController wrapped-volume-race-f4f2a02c-8f90-11ea-b5fe-0242ac110017 took: 6.296528ms May 6 12:00:41.934: INFO: Terminating ReplicationController wrapped-volume-race-f4f2a02c-8f90-11ea-b5fe-0242ac110017 pods took: 200.330877ms STEP: Creating RC which spawns configmap-volume pods May 6 12:01:21.909: INFO: Pod name wrapped-volume-race-4d40ea80-8f91-11ea-b5fe-0242ac110017: Found 0 pods out of 5 May 6 12:01:26.918: INFO: Pod name wrapped-volume-race-4d40ea80-8f91-11ea-b5fe-0242ac110017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4d40ea80-8f91-11ea-b5fe-0242ac110017 in namespace e2e-tests-emptydir-wrapper-djk7j, will wait for the garbage collector to delete the pods May 6 12:03:24.046: INFO: Deleting ReplicationController wrapped-volume-race-4d40ea80-8f91-11ea-b5fe-0242ac110017 took: 7.826107ms May 6 12:03:24.246: INFO: Terminating ReplicationController wrapped-volume-race-4d40ea80-8f91-11ea-b5fe-0242ac110017 pods took: 200.274181ms STEP: Creating RC which spawns configmap-volume pods May 6 12:04:11.491: INFO: Pod name wrapped-volume-race-b268e5c5-8f91-11ea-b5fe-0242ac110017: Found 0 pods out of 5 May 6 12:04:16.528: INFO: Pod name wrapped-volume-race-b268e5c5-8f91-11ea-b5fe-0242ac110017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: 
deleting ReplicationController wrapped-volume-race-b268e5c5-8f91-11ea-b5fe-0242ac110017 in namespace e2e-tests-emptydir-wrapper-djk7j, will wait for the garbage collector to delete the pods May 6 12:07:02.612: INFO: Deleting ReplicationController wrapped-volume-race-b268e5c5-8f91-11ea-b5fe-0242ac110017 took: 8.832903ms May 6 12:07:02.713: INFO: Terminating ReplicationController wrapped-volume-race-b268e5c5-8f91-11ea-b5fe-0242ac110017 pods took: 100.465954ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:07:42.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-djk7j" for this suite. May 6 12:07:51.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:07:51.124: INFO: namespace: e2e-tests-emptydir-wrapper-djk7j, resource: bindings, ignored listing per whitelist May 6 12:07:51.145: INFO: namespace e2e-tests-emptydir-wrapper-djk7j deletion completed in 8.121107359s • [SLOW TEST:538.212 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:07:51.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 12:07:51.268: INFO: Waiting up to 5m0s for pod "downwardapi-volume-356e2840-8f92-11ea-b5fe-0242ac110017" in namespace "e2e-tests-downward-api-jmjxt" to be "success or failure" May 6 12:07:51.461: INFO: Pod "downwardapi-volume-356e2840-8f92-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 193.004118ms May 6 12:07:53.466: INFO: Pod "downwardapi-volume-356e2840-8f92-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197781278s May 6 12:07:55.545: INFO: Pod "downwardapi-volume-356e2840-8f92-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.276918795s May 6 12:07:57.549: INFO: Pod "downwardapi-volume-356e2840-8f92-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.281308148s STEP: Saw pod success May 6 12:07:57.550: INFO: Pod "downwardapi-volume-356e2840-8f92-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:07:57.552: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-356e2840-8f92-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 12:07:57.713: INFO: Waiting for pod downwardapi-volume-356e2840-8f92-11ea-b5fe-0242ac110017 to disappear May 6 12:07:57.716: INFO: Pod downwardapi-volume-356e2840-8f92-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:07:57.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jmjxt" for this suite. May 6 12:08:03.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:08:03.926: INFO: namespace: e2e-tests-downward-api-jmjxt, resource: bindings, ignored listing per whitelist May 6 12:08:03.948: INFO: namespace e2e-tests-downward-api-jmjxt deletion completed in 6.164265534s • [SLOW TEST:12.802 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:08:03.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 6 12:08:10.282: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:08:34.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-bz6rh" for this suite. 
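The behaviour this spec asserts (deleting a namespace removes all of its pods) can be checked by hand with the sketch below; the namespace and pod names are placeholders, not the ones generated above.
kubectl create namespace nsdeletetest-demo
kubectl run test-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never --namespace=nsdeletetest-demo
kubectl wait --for=condition=Ready pod/test-pod --namespace=nsdeletetest-demo
# Deleting the namespace tears down every pod in it; once deletion completes,
# listing pods in that namespace fails with NotFound instead of returning the old pod.
kubectl delete namespace nsdeletetest-demo
kubectl get pods --namespace=nsdeletetest-demo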
May 6 12:08:40.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:08:40.520: INFO: namespace: e2e-tests-namespaces-bz6rh, resource: bindings, ignored listing per whitelist May 6 12:08:40.534: INFO: namespace e2e-tests-namespaces-bz6rh deletion completed in 6.184845885s STEP: Destroying namespace "e2e-tests-nsdeletetest-8x5wh" for this suite. May 6 12:08:40.536: INFO: Namespace e2e-tests-nsdeletetest-8x5wh was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-csgff" for this suite. May 6 12:08:46.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:08:46.571: INFO: namespace: e2e-tests-nsdeletetest-csgff, resource: bindings, ignored listing per whitelist May 6 12:08:46.621: INFO: namespace e2e-tests-nsdeletetest-csgff deletion completed in 6.085172161s • [SLOW TEST:42.673 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:08:46.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-571add2a-8f92-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 12:08:48.014: INFO: Waiting up to 5m0s for pod "pod-secrets-5740adc3-8f92-11ea-b5fe-0242ac110017" in namespace "e2e-tests-secrets-wgf6j" to be "success or failure" May 6 12:08:48.332: INFO: Pod "pod-secrets-5740adc3-8f92-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 318.129152ms May 6 12:08:50.636: INFO: Pod "pod-secrets-5740adc3-8f92-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.621840593s May 6 12:08:52.640: INFO: Pod "pod-secrets-5740adc3-8f92-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.625668473s May 6 12:08:54.643: INFO: Pod "pod-secrets-5740adc3-8f92-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.629431662s May 6 12:08:56.647: INFO: Pod "pod-secrets-5740adc3-8f92-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.633082588s STEP: Saw pod success May 6 12:08:56.647: INFO: Pod "pod-secrets-5740adc3-8f92-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:08:56.650: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-5740adc3-8f92-11ea-b5fe-0242ac110017 container secret-volume-test: STEP: delete the pod May 6 12:08:56.784: INFO: Waiting for pod pod-secrets-5740adc3-8f92-11ea-b5fe-0242ac110017 to disappear May 6 12:08:56.789: INFO: Pod pod-secrets-5740adc3-8f92-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:08:56.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-wgf6j" for this suite. May 6 12:09:02.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:09:02.828: INFO: namespace: e2e-tests-secrets-wgf6j, resource: bindings, ignored listing per whitelist May 6 12:09:02.873: INFO: namespace e2e-tests-secrets-wgf6j deletion completed in 6.081407433s • [SLOW TEST:16.252 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:09:02.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-6040af87-8f92-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 12:09:03.342: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6049fe4f-8f92-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-bkp7z" to be "success or failure" May 6 12:09:03.504: INFO: Pod "pod-projected-configmaps-6049fe4f-8f92-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 161.792964ms May 6 12:09:05.508: INFO: Pod "pod-projected-configmaps-6049fe4f-8f92-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166142021s May 6 12:09:07.511: INFO: Pod "pod-projected-configmaps-6049fe4f-8f92-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.168938979s STEP: Saw pod success May 6 12:09:07.511: INFO: Pod "pod-projected-configmaps-6049fe4f-8f92-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:09:07.517: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-6049fe4f-8f92-11ea-b5fe-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 6 12:09:07.546: INFO: Waiting for pod pod-projected-configmaps-6049fe4f-8f92-11ea-b5fe-0242ac110017 to disappear May 6 12:09:07.555: INFO: Pod pod-projected-configmaps-6049fe4f-8f92-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:09:07.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bkp7z" for this suite. May 6 12:09:13.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:09:13.859: INFO: namespace: e2e-tests-projected-bkp7z, resource: bindings, ignored listing per whitelist May 6 12:09:13.913: INFO: namespace e2e-tests-projected-bkp7z deletion completed in 6.354757564s • [SLOW TEST:11.039 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:09:13.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-txhsj/configmap-test-66c0b1e9-8f92-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 12:09:14.054: INFO: Waiting up to 5m0s for pod "pod-configmaps-66c63809-8f92-11ea-b5fe-0242ac110017" in namespace "e2e-tests-configmap-txhsj" to be "success or failure" May 6 12:09:14.084: INFO: Pod "pod-configmaps-66c63809-8f92-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 29.745924ms May 6 12:09:16.234: INFO: Pod "pod-configmaps-66c63809-8f92-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180219351s May 6 12:09:18.239: INFO: Pod "pod-configmaps-66c63809-8f92-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184817373s May 6 12:09:20.243: INFO: Pod "pod-configmaps-66c63809-8f92-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.188587523s STEP: Saw pod success May 6 12:09:20.243: INFO: Pod "pod-configmaps-66c63809-8f92-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:09:20.246: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-66c63809-8f92-11ea-b5fe-0242ac110017 container env-test: STEP: delete the pod May 6 12:09:20.476: INFO: Waiting for pod pod-configmaps-66c63809-8f92-11ea-b5fe-0242ac110017 to disappear May 6 12:09:20.539: INFO: Pod pod-configmaps-66c63809-8f92-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:09:20.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-txhsj" for this suite. May 6 12:09:26.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:09:26.610: INFO: namespace: e2e-tests-configmap-txhsj, resource: bindings, ignored listing per whitelist May 6 12:09:26.646: INFO: namespace e2e-tests-configmap-txhsj deletion completed in 6.103437169s • [SLOW TEST:12.733 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:09:26.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-hzgfg I0506 12:09:26.750147 7 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-hzgfg, replica count: 1 I0506 12:09:27.800573 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 12:09:28.800776 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 12:09:29.800945 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 12:09:30.801105 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 12:09:30.931: INFO: Created: latency-svc-m84h6 May 6 12:09:30.951: INFO: Got endpoints: latency-svc-m84h6 [50.154974ms] May 6 12:09:30.998: INFO: Created: latency-svc-hwtk8 May 6 12:09:31.007: INFO: Got endpoints: latency-svc-hwtk8 [55.554223ms] May 6 12:09:31.051: INFO: Created: latency-svc-6jh6v May 6 12:09:31.235: INFO: Got endpoints: 
latency-svc-6jh6v [284.060855ms] May 6 12:09:31.239: INFO: Created: latency-svc-bn5gz May 6 12:09:31.253: INFO: Got endpoints: latency-svc-bn5gz [302.079931ms] May 6 12:09:31.566: INFO: Created: latency-svc-9gmsp May 6 12:09:31.612: INFO: Got endpoints: latency-svc-9gmsp [660.671257ms] May 6 12:09:31.777: INFO: Created: latency-svc-q8s9g May 6 12:09:31.780: INFO: Got endpoints: latency-svc-q8s9g [829.055864ms] May 6 12:09:31.977: INFO: Created: latency-svc-4nlhz May 6 12:09:31.981: INFO: Got endpoints: latency-svc-4nlhz [1.029633146s] May 6 12:09:32.138: INFO: Created: latency-svc-xdl68 May 6 12:09:32.218: INFO: Got endpoints: latency-svc-xdl68 [1.266434799s] May 6 12:09:32.331: INFO: Created: latency-svc-9c9tm May 6 12:09:32.380: INFO: Got endpoints: latency-svc-9c9tm [1.428561397s] May 6 12:09:32.511: INFO: Created: latency-svc-br6r5 May 6 12:09:32.515: INFO: Got endpoints: latency-svc-br6r5 [1.563826227s] May 6 12:09:32.569: INFO: Created: latency-svc-s2vgb May 6 12:09:32.590: INFO: Got endpoints: latency-svc-s2vgb [1.638478348s] May 6 12:09:32.654: INFO: Created: latency-svc-9mhvm May 6 12:09:32.668: INFO: Got endpoints: latency-svc-9mhvm [1.716890102s] May 6 12:09:32.691: INFO: Created: latency-svc-hhps8 May 6 12:09:32.722: INFO: Got endpoints: latency-svc-hhps8 [1.77074137s] May 6 12:09:32.798: INFO: Created: latency-svc-tklbq May 6 12:09:32.801: INFO: Got endpoints: latency-svc-tklbq [1.849628394s] May 6 12:09:32.885: INFO: Created: latency-svc-vjt5v May 6 12:09:32.953: INFO: Got endpoints: latency-svc-vjt5v [2.001720021s] May 6 12:09:32.963: INFO: Created: latency-svc-6gst5 May 6 12:09:32.999: INFO: Got endpoints: latency-svc-6gst5 [2.04780197s] May 6 12:09:33.097: INFO: Created: latency-svc-42zkz May 6 12:09:33.101: INFO: Got endpoints: latency-svc-42zkz [2.093690134s] May 6 12:09:33.155: INFO: Created: latency-svc-7mfxc May 6 12:09:33.173: INFO: Got endpoints: latency-svc-7mfxc [1.937791579s] May 6 12:09:33.235: INFO: Created: latency-svc-ncmwk May 6 12:09:33.238: INFO: Got endpoints: latency-svc-ncmwk [1.984174253s] May 6 12:09:33.283: INFO: Created: latency-svc-lzjm4 May 6 12:09:33.312: INFO: Got endpoints: latency-svc-lzjm4 [1.69940708s] May 6 12:09:33.391: INFO: Created: latency-svc-klqs9 May 6 12:09:33.396: INFO: Got endpoints: latency-svc-klqs9 [1.616054068s] May 6 12:09:33.424: INFO: Created: latency-svc-tsh22 May 6 12:09:33.433: INFO: Got endpoints: latency-svc-tsh22 [1.452101766s] May 6 12:09:33.467: INFO: Created: latency-svc-xfv77 May 6 12:09:33.534: INFO: Got endpoints: latency-svc-xfv77 [1.315753221s] May 6 12:09:33.548: INFO: Created: latency-svc-g2gvn May 6 12:09:33.567: INFO: Got endpoints: latency-svc-g2gvn [1.186699456s] May 6 12:09:33.599: INFO: Created: latency-svc-c5whp May 6 12:09:33.620: INFO: Got endpoints: latency-svc-c5whp [1.104455189s] May 6 12:09:33.713: INFO: Created: latency-svc-pk4rc May 6 12:09:33.745: INFO: Got endpoints: latency-svc-pk4rc [1.155289312s] May 6 12:09:33.775: INFO: Created: latency-svc-p7kln May 6 12:09:33.788: INFO: Got endpoints: latency-svc-p7kln [1.11974542s] May 6 12:09:33.869: INFO: Created: latency-svc-g7276 May 6 12:09:33.872: INFO: Got endpoints: latency-svc-g7276 [1.149763687s] May 6 12:09:33.917: INFO: Created: latency-svc-5p9ww May 6 12:09:33.933: INFO: Got endpoints: latency-svc-5p9ww [1.132105873s] May 6 12:09:34.019: INFO: Created: latency-svc-rtg9h May 6 12:09:34.023: INFO: Got endpoints: latency-svc-rtg9h [1.069592881s] May 6 12:09:34.051: INFO: Created: latency-svc-cl62r May 6 12:09:34.059: INFO: Got endpoints: 
latency-svc-cl62r [1.059606726s] May 6 12:09:34.087: INFO: Created: latency-svc-slb7g May 6 12:09:34.095: INFO: Got endpoints: latency-svc-slb7g [994.627306ms] May 6 12:09:34.187: INFO: Created: latency-svc-lp6wg May 6 12:09:34.191: INFO: Got endpoints: latency-svc-lp6wg [1.017441299s] May 6 12:09:34.240: INFO: Created: latency-svc-6xd9c May 6 12:09:34.273: INFO: Got endpoints: latency-svc-6xd9c [1.035431373s] May 6 12:09:34.336: INFO: Created: latency-svc-7gdff May 6 12:09:34.340: INFO: Got endpoints: latency-svc-7gdff [1.027968014s] May 6 12:09:34.388: INFO: Created: latency-svc-7jq4n May 6 12:09:34.408: INFO: Got endpoints: latency-svc-7jq4n [1.011602349s] May 6 12:09:34.468: INFO: Created: latency-svc-tqb58 May 6 12:09:34.492: INFO: Got endpoints: latency-svc-tqb58 [1.059211598s] May 6 12:09:34.537: INFO: Created: latency-svc-jhrbn May 6 12:09:34.547: INFO: Got endpoints: latency-svc-jhrbn [1.012926139s] May 6 12:09:34.600: INFO: Created: latency-svc-gjnp7 May 6 12:09:34.603: INFO: Got endpoints: latency-svc-gjnp7 [1.036629817s] May 6 12:09:34.633: INFO: Created: latency-svc-jfvzc May 6 12:09:34.643: INFO: Got endpoints: latency-svc-jfvzc [1.023815703s] May 6 12:09:34.666: INFO: Created: latency-svc-6l7qv May 6 12:09:34.674: INFO: Got endpoints: latency-svc-6l7qv [928.315704ms] May 6 12:09:34.697: INFO: Created: latency-svc-2v5gf May 6 12:09:34.750: INFO: Got endpoints: latency-svc-2v5gf [961.51034ms] May 6 12:09:34.771: INFO: Created: latency-svc-h5p6d May 6 12:09:34.782: INFO: Got endpoints: latency-svc-h5p6d [910.347397ms] May 6 12:09:34.832: INFO: Created: latency-svc-fjjs2 May 6 12:09:34.842: INFO: Got endpoints: latency-svc-fjjs2 [908.537023ms] May 6 12:09:34.887: INFO: Created: latency-svc-5n526 May 6 12:09:34.890: INFO: Got endpoints: latency-svc-5n526 [867.080559ms] May 6 12:09:34.918: INFO: Created: latency-svc-l2vlb May 6 12:09:34.933: INFO: Got endpoints: latency-svc-l2vlb [873.873185ms] May 6 12:09:34.972: INFO: Created: latency-svc-g875t May 6 12:09:35.049: INFO: Got endpoints: latency-svc-g875t [953.856151ms] May 6 12:09:35.054: INFO: Created: latency-svc-vfxrv May 6 12:09:35.077: INFO: Got endpoints: latency-svc-vfxrv [886.490493ms] May 6 12:09:35.107: INFO: Created: latency-svc-brlch May 6 12:09:35.125: INFO: Got endpoints: latency-svc-brlch [852.312247ms] May 6 12:09:35.146: INFO: Created: latency-svc-jx2mx May 6 12:09:35.217: INFO: Got endpoints: latency-svc-jx2mx [877.362279ms] May 6 12:09:35.219: INFO: Created: latency-svc-7b6zx May 6 12:09:35.248: INFO: Got endpoints: latency-svc-7b6zx [839.845938ms] May 6 12:09:35.299: INFO: Created: latency-svc-zgjqn May 6 12:09:35.384: INFO: Got endpoints: latency-svc-zgjqn [891.23195ms] May 6 12:09:35.387: INFO: Created: latency-svc-hxkqt May 6 12:09:35.396: INFO: Got endpoints: latency-svc-hxkqt [849.599503ms] May 6 12:09:35.437: INFO: Created: latency-svc-8x2xg May 6 12:09:35.450: INFO: Got endpoints: latency-svc-8x2xg [846.981856ms] May 6 12:09:35.535: INFO: Created: latency-svc-vlpjb May 6 12:09:35.540: INFO: Got endpoints: latency-svc-vlpjb [896.550062ms] May 6 12:09:35.580: INFO: Created: latency-svc-2bx7p May 6 12:09:35.584: INFO: Got endpoints: latency-svc-2bx7p [910.3075ms] May 6 12:09:35.616: INFO: Created: latency-svc-frrxz May 6 12:09:35.701: INFO: Got endpoints: latency-svc-frrxz [951.703225ms] May 6 12:09:35.706: INFO: Created: latency-svc-6cdr8 May 6 12:09:35.746: INFO: Got endpoints: latency-svc-6cdr8 [963.105325ms] May 6 12:09:35.788: INFO: Created: latency-svc-nglp2 May 6 12:09:35.851: INFO: Got endpoints: 
latency-svc-nglp2 [1.009086134s] May 6 12:09:35.866: INFO: Created: latency-svc-h5q74 May 6 12:09:35.884: INFO: Got endpoints: latency-svc-h5q74 [994.270085ms] May 6 12:09:35.928: INFO: Created: latency-svc-w9xdm May 6 12:09:35.977: INFO: Got endpoints: latency-svc-w9xdm [1.043927068s] May 6 12:09:35.988: INFO: Created: latency-svc-wg5jd May 6 12:09:36.022: INFO: Got endpoints: latency-svc-wg5jd [973.303528ms] May 6 12:09:36.066: INFO: Created: latency-svc-7f57p May 6 12:09:36.126: INFO: Got endpoints: latency-svc-7f57p [1.048947097s] May 6 12:09:36.127: INFO: Created: latency-svc-7krrr May 6 12:09:36.130: INFO: Got endpoints: latency-svc-7krrr [1.004508984s] May 6 12:09:36.159: INFO: Created: latency-svc-v6pzm May 6 12:09:36.173: INFO: Got endpoints: latency-svc-v6pzm [956.018616ms] May 6 12:09:36.192: INFO: Created: latency-svc-jszl8 May 6 12:09:36.216: INFO: Got endpoints: latency-svc-jszl8 [967.518762ms] May 6 12:09:36.286: INFO: Created: latency-svc-bsbl6 May 6 12:09:36.287: INFO: Got endpoints: latency-svc-bsbl6 [903.15783ms] May 6 12:09:36.364: INFO: Created: latency-svc-5gjxn May 6 12:09:36.414: INFO: Got endpoints: latency-svc-5gjxn [1.017563265s] May 6 12:09:36.450: INFO: Created: latency-svc-zlkl6 May 6 12:09:36.468: INFO: Got endpoints: latency-svc-zlkl6 [1.017161663s] May 6 12:09:36.492: INFO: Created: latency-svc-qmqbc May 6 12:09:36.504: INFO: Got endpoints: latency-svc-qmqbc [963.793528ms] May 6 12:09:36.593: INFO: Created: latency-svc-v4c56 May 6 12:09:36.606: INFO: Got endpoints: latency-svc-v4c56 [1.022487541s] May 6 12:09:36.654: INFO: Created: latency-svc-vp2jh May 6 12:09:36.666: INFO: Got endpoints: latency-svc-vp2jh [964.565987ms] May 6 12:09:36.738: INFO: Created: latency-svc-jk5lj May 6 12:09:36.744: INFO: Got endpoints: latency-svc-jk5lj [998.712394ms] May 6 12:09:36.767: INFO: Created: latency-svc-t2c9w May 6 12:09:36.802: INFO: Got endpoints: latency-svc-t2c9w [950.899389ms] May 6 12:09:36.834: INFO: Created: latency-svc-qvz62 May 6 12:09:36.881: INFO: Got endpoints: latency-svc-qvz62 [996.60516ms] May 6 12:09:36.894: INFO: Created: latency-svc-pvg8s May 6 12:09:36.919: INFO: Got endpoints: latency-svc-pvg8s [942.544379ms] May 6 12:09:36.945: INFO: Created: latency-svc-97lkg May 6 12:09:36.968: INFO: Got endpoints: latency-svc-97lkg [945.167706ms] May 6 12:09:37.019: INFO: Created: latency-svc-sn849 May 6 12:09:37.034: INFO: Got endpoints: latency-svc-sn849 [907.571451ms] May 6 12:09:37.062: INFO: Created: latency-svc-gjqr5 May 6 12:09:37.082: INFO: Got endpoints: latency-svc-gjqr5 [952.262927ms] May 6 12:09:37.116: INFO: Created: latency-svc-87q5h May 6 12:09:37.180: INFO: Got endpoints: latency-svc-87q5h [1.007313463s] May 6 12:09:37.182: INFO: Created: latency-svc-jgc9h May 6 12:09:37.196: INFO: Got endpoints: latency-svc-jgc9h [980.883416ms] May 6 12:09:37.267: INFO: Created: latency-svc-brlk4 May 6 12:09:37.414: INFO: Got endpoints: latency-svc-brlk4 [1.126947832s] May 6 12:09:37.432: INFO: Created: latency-svc-pzcjz May 6 12:09:37.455: INFO: Got endpoints: latency-svc-pzcjz [1.040826353s] May 6 12:09:37.590: INFO: Created: latency-svc-ghhj7 May 6 12:09:37.594: INFO: Got endpoints: latency-svc-ghhj7 [1.126644802s] May 6 12:09:37.731: INFO: Created: latency-svc-79cg7 May 6 12:09:37.734: INFO: Got endpoints: latency-svc-79cg7 [1.230061239s] May 6 12:09:37.804: INFO: Created: latency-svc-lb97s May 6 12:09:37.815: INFO: Got endpoints: latency-svc-lb97s [1.208810738s] May 6 12:09:37.968: INFO: Created: latency-svc-f97mg May 6 12:09:38.001: INFO: Got endpoints: 
latency-svc-f97mg [1.335439102s] May 6 12:09:38.133: INFO: Created: latency-svc-bjw45 May 6 12:09:38.137: INFO: Got endpoints: latency-svc-bjw45 [1.392215706s] May 6 12:09:38.184: INFO: Created: latency-svc-sgrd6 May 6 12:09:38.199: INFO: Got endpoints: latency-svc-sgrd6 [1.397255844s] May 6 12:09:38.230: INFO: Created: latency-svc-lwjj4 May 6 12:09:38.282: INFO: Got endpoints: latency-svc-lwjj4 [1.4010885s] May 6 12:09:38.289: INFO: Created: latency-svc-qkztb May 6 12:09:38.295: INFO: Got endpoints: latency-svc-qkztb [1.376019993s] May 6 12:09:38.342: INFO: Created: latency-svc-8gbk5 May 6 12:09:38.362: INFO: Got endpoints: latency-svc-8gbk5 [1.394483976s] May 6 12:09:38.426: INFO: Created: latency-svc-fsrpm May 6 12:09:38.452: INFO: Got endpoints: latency-svc-fsrpm [1.418028428s] May 6 12:09:38.833: INFO: Created: latency-svc-trsht May 6 12:09:38.947: INFO: Got endpoints: latency-svc-trsht [1.86489067s] May 6 12:09:38.993: INFO: Created: latency-svc-qmdvq May 6 12:09:39.034: INFO: Got endpoints: latency-svc-qmdvq [1.853873847s] May 6 12:09:39.103: INFO: Created: latency-svc-nsxpv May 6 12:09:39.118: INFO: Got endpoints: latency-svc-nsxpv [1.921542652s] May 6 12:09:39.439: INFO: Created: latency-svc-fnmpk May 6 12:09:39.441: INFO: Got endpoints: latency-svc-fnmpk [2.027387566s] May 6 12:09:39.478: INFO: Created: latency-svc-wbdlz May 6 12:09:39.484: INFO: Got endpoints: latency-svc-wbdlz [2.028913375s] May 6 12:09:39.510: INFO: Created: latency-svc-v5z4x May 6 12:09:39.527: INFO: Got endpoints: latency-svc-v5z4x [1.932190011s] May 6 12:09:39.626: INFO: Created: latency-svc-gd7v9 May 6 12:09:39.635: INFO: Got endpoints: latency-svc-gd7v9 [1.900731834s] May 6 12:09:39.805: INFO: Created: latency-svc-x8s4d May 6 12:09:39.833: INFO: Got endpoints: latency-svc-x8s4d [2.017733782s] May 6 12:09:39.857: INFO: Created: latency-svc-djxcv May 6 12:09:39.863: INFO: Got endpoints: latency-svc-djxcv [1.861246118s] May 6 12:09:39.889: INFO: Created: latency-svc-hvqwp May 6 12:09:39.935: INFO: Got endpoints: latency-svc-hvqwp [1.797989791s] May 6 12:09:39.978: INFO: Created: latency-svc-82whq May 6 12:09:40.002: INFO: Got endpoints: latency-svc-82whq [1.802807877s] May 6 12:09:40.127: INFO: Created: latency-svc-zj22g May 6 12:09:40.169: INFO: Got endpoints: latency-svc-zj22g [1.886760416s] May 6 12:09:40.344: INFO: Created: latency-svc-lsjr7 May 6 12:09:40.414: INFO: Got endpoints: latency-svc-lsjr7 [2.118268393s] May 6 12:09:40.629: INFO: Created: latency-svc-l46rf May 6 12:09:40.736: INFO: Got endpoints: latency-svc-l46rf [2.373317299s] May 6 12:09:40.784: INFO: Created: latency-svc-4848r May 6 12:09:40.791: INFO: Got endpoints: latency-svc-4848r [2.338586449s] May 6 12:09:40.964: INFO: Created: latency-svc-hnj46 May 6 12:09:41.169: INFO: Created: latency-svc-ltqn5 May 6 12:09:41.222: INFO: Got endpoints: latency-svc-hnj46 [2.27443315s] May 6 12:09:41.327: INFO: Created: latency-svc-9km8x May 6 12:09:41.337: INFO: Got endpoints: latency-svc-9km8x [2.219200321s] May 6 12:09:41.370: INFO: Got endpoints: latency-svc-ltqn5 [2.335109535s] May 6 12:09:41.371: INFO: Created: latency-svc-km8sh May 6 12:09:41.379: INFO: Got endpoints: latency-svc-km8sh [1.937508345s] May 6 12:09:41.402: INFO: Created: latency-svc-r8tzj May 6 12:09:41.459: INFO: Got endpoints: latency-svc-r8tzj [1.974834966s] May 6 12:09:41.504: INFO: Created: latency-svc-4nqfc May 6 12:09:41.511: INFO: Got endpoints: latency-svc-4nqfc [1.984755402s] May 6 12:09:41.660: INFO: Created: latency-svc-ldvlc May 6 12:09:41.673: INFO: Got endpoints: 
latency-svc-ldvlc [2.038418357s] May 6 12:09:41.721: INFO: Created: latency-svc-jtxvx May 6 12:09:41.740: INFO: Got endpoints: latency-svc-jtxvx [1.906700283s] May 6 12:09:41.810: INFO: Created: latency-svc-fqpl4 May 6 12:09:41.814: INFO: Got endpoints: latency-svc-fqpl4 [1.950941986s] May 6 12:09:41.838: INFO: Created: latency-svc-958pw May 6 12:09:41.856: INFO: Got endpoints: latency-svc-958pw [1.921033908s] May 6 12:09:41.890: INFO: Created: latency-svc-5ff9m May 6 12:09:41.902: INFO: Got endpoints: latency-svc-5ff9m [1.900167626s] May 6 12:09:41.953: INFO: Created: latency-svc-4w4np May 6 12:09:41.957: INFO: Got endpoints: latency-svc-4w4np [1.787577715s] May 6 12:09:42.021: INFO: Created: latency-svc-87r6z May 6 12:09:42.042: INFO: Got endpoints: latency-svc-87r6z [1.628491498s] May 6 12:09:42.127: INFO: Created: latency-svc-2944v May 6 12:09:42.130: INFO: Got endpoints: latency-svc-2944v [1.393850658s] May 6 12:09:42.180: INFO: Created: latency-svc-gddjf May 6 12:09:42.191: INFO: Got endpoints: latency-svc-gddjf [1.400683513s] May 6 12:09:42.212: INFO: Created: latency-svc-6pmbc May 6 12:09:42.282: INFO: Got endpoints: latency-svc-6pmbc [1.060302053s] May 6 12:09:42.284: INFO: Created: latency-svc-4rkbh May 6 12:09:42.300: INFO: Got endpoints: latency-svc-4rkbh [962.665108ms] May 6 12:09:42.336: INFO: Created: latency-svc-4xxzq May 6 12:09:42.348: INFO: Got endpoints: latency-svc-4xxzq [978.359303ms] May 6 12:09:42.438: INFO: Created: latency-svc-l2r2t May 6 12:09:42.458: INFO: Got endpoints: latency-svc-l2r2t [1.078874999s] May 6 12:09:42.507: INFO: Created: latency-svc-8l4xr May 6 12:09:42.530: INFO: Got endpoints: latency-svc-8l4xr [1.070989016s] May 6 12:09:42.613: INFO: Created: latency-svc-48vjf May 6 12:09:42.624: INFO: Got endpoints: latency-svc-48vjf [1.112980979s] May 6 12:09:42.662: INFO: Created: latency-svc-wrqpw May 6 12:09:42.679: INFO: Got endpoints: latency-svc-wrqpw [1.005229336s] May 6 12:09:42.710: INFO: Created: latency-svc-59td7 May 6 12:09:42.791: INFO: Got endpoints: latency-svc-59td7 [1.05128925s] May 6 12:09:42.793: INFO: Created: latency-svc-qgmdg May 6 12:09:42.811: INFO: Got endpoints: latency-svc-qgmdg [997.440026ms] May 6 12:09:42.858: INFO: Created: latency-svc-czp6d May 6 12:09:42.882: INFO: Got endpoints: latency-svc-czp6d [1.026139113s] May 6 12:09:42.923: INFO: Created: latency-svc-lz5pp May 6 12:09:42.932: INFO: Got endpoints: latency-svc-lz5pp [1.029189897s] May 6 12:09:42.992: INFO: Created: latency-svc-p6vgb May 6 12:09:43.016: INFO: Got endpoints: latency-svc-p6vgb [1.059073645s] May 6 12:09:43.098: INFO: Created: latency-svc-4pltj May 6 12:09:43.112: INFO: Got endpoints: latency-svc-4pltj [1.069883033s] May 6 12:09:43.146: INFO: Created: latency-svc-lxffs May 6 12:09:43.160: INFO: Got endpoints: latency-svc-lxffs [1.030524912s] May 6 12:09:43.190: INFO: Created: latency-svc-4srvm May 6 12:09:43.270: INFO: Got endpoints: latency-svc-4srvm [1.078769642s] May 6 12:09:43.275: INFO: Created: latency-svc-qn5qp May 6 12:09:43.295: INFO: Got endpoints: latency-svc-qn5qp [1.012621549s] May 6 12:09:43.346: INFO: Created: latency-svc-mnpmt May 6 12:09:43.438: INFO: Got endpoints: latency-svc-mnpmt [1.137796681s] May 6 12:09:43.440: INFO: Created: latency-svc-wbb6q May 6 12:09:43.450: INFO: Got endpoints: latency-svc-wbb6q [155.358099ms] May 6 12:09:43.473: INFO: Created: latency-svc-scsdp May 6 12:09:43.486: INFO: Got endpoints: latency-svc-scsdp [1.137578056s] May 6 12:09:43.528: INFO: Created: latency-svc-czlr9 May 6 12:09:43.606: INFO: Got endpoints: 
latency-svc-czlr9 [1.148324986s] May 6 12:09:43.610: INFO: Created: latency-svc-d9rdl May 6 12:09:43.644: INFO: Got endpoints: latency-svc-d9rdl [1.113851116s] May 6 12:09:43.834: INFO: Created: latency-svc-fdqmb May 6 12:09:43.837: INFO: Got endpoints: latency-svc-fdqmb [1.212555393s] May 6 12:09:43.872: INFO: Created: latency-svc-t7qjc May 6 12:09:43.891: INFO: Got endpoints: latency-svc-t7qjc [1.212300194s] May 6 12:09:43.914: INFO: Created: latency-svc-pzr8b May 6 12:09:43.927: INFO: Got endpoints: latency-svc-pzr8b [1.135316082s] May 6 12:09:43.989: INFO: Created: latency-svc-25tm5 May 6 12:09:43.992: INFO: Got endpoints: latency-svc-25tm5 [1.180680733s] May 6 12:09:44.028: INFO: Created: latency-svc-f7xcr May 6 12:09:44.041: INFO: Got endpoints: latency-svc-f7xcr [1.159218443s] May 6 12:09:44.064: INFO: Created: latency-svc-vs874 May 6 12:09:44.088: INFO: Got endpoints: latency-svc-vs874 [1.156114414s] May 6 12:09:44.162: INFO: Created: latency-svc-v2wn2 May 6 12:09:44.205: INFO: Got endpoints: latency-svc-v2wn2 [1.189120429s] May 6 12:09:44.654: INFO: Created: latency-svc-kkjb9 May 6 12:09:44.893: INFO: Got endpoints: latency-svc-kkjb9 [1.780347855s] May 6 12:09:44.951: INFO: Created: latency-svc-5nphs May 6 12:09:45.043: INFO: Got endpoints: latency-svc-5nphs [1.882750792s] May 6 12:09:45.064: INFO: Created: latency-svc-dp2z4 May 6 12:09:45.098: INFO: Got endpoints: latency-svc-dp2z4 [1.82725518s] May 6 12:09:45.306: INFO: Created: latency-svc-zctm8 May 6 12:09:45.319: INFO: Got endpoints: latency-svc-zctm8 [1.880548967s] May 6 12:09:45.396: INFO: Created: latency-svc-lphg4 May 6 12:09:45.492: INFO: Got endpoints: latency-svc-lphg4 [2.04127694s] May 6 12:09:45.501: INFO: Created: latency-svc-hgkh6 May 6 12:09:45.541: INFO: Got endpoints: latency-svc-hgkh6 [2.055217038s] May 6 12:09:45.659: INFO: Created: latency-svc-qwkmv May 6 12:09:45.662: INFO: Got endpoints: latency-svc-qwkmv [2.055856866s] May 6 12:09:45.747: INFO: Created: latency-svc-mhtb2 May 6 12:09:45.757: INFO: Got endpoints: latency-svc-mhtb2 [2.113384813s] May 6 12:09:45.811: INFO: Created: latency-svc-7p8w4 May 6 12:09:45.818: INFO: Got endpoints: latency-svc-7p8w4 [1.981160018s] May 6 12:09:45.847: INFO: Created: latency-svc-wkcw4 May 6 12:09:45.854: INFO: Got endpoints: latency-svc-wkcw4 [1.962582862s] May 6 12:09:45.887: INFO: Created: latency-svc-spqg9 May 6 12:09:45.953: INFO: Got endpoints: latency-svc-spqg9 [2.026143537s] May 6 12:09:45.955: INFO: Created: latency-svc-mhlrz May 6 12:09:45.968: INFO: Got endpoints: latency-svc-mhlrz [1.976017545s] May 6 12:09:46.015: INFO: Created: latency-svc-78jvf May 6 12:09:46.028: INFO: Got endpoints: latency-svc-78jvf [1.986938273s] May 6 12:09:46.051: INFO: Created: latency-svc-m65gm May 6 12:09:46.104: INFO: Got endpoints: latency-svc-m65gm [2.016069132s] May 6 12:09:46.107: INFO: Created: latency-svc-rlnr5 May 6 12:09:46.120: INFO: Got endpoints: latency-svc-rlnr5 [1.914450729s] May 6 12:09:46.162: INFO: Created: latency-svc-zg6jm May 6 12:09:46.192: INFO: Got endpoints: latency-svc-zg6jm [1.298879842s] May 6 12:09:46.251: INFO: Created: latency-svc-xhvp5 May 6 12:09:46.264: INFO: Got endpoints: latency-svc-xhvp5 [1.221291729s] May 6 12:09:46.302: INFO: Created: latency-svc-749xl May 6 12:09:46.330: INFO: Got endpoints: latency-svc-749xl [1.232763726s] May 6 12:09:46.396: INFO: Created: latency-svc-fpfz2 May 6 12:09:46.402: INFO: Got endpoints: latency-svc-fpfz2 [1.083732133s] May 6 12:09:46.431: INFO: Created: latency-svc-lqzd7 May 6 12:09:46.462: INFO: Got endpoints: 
latency-svc-lqzd7 [970.548609ms] May 6 12:09:46.485: INFO: Created: latency-svc-wtvrt May 6 12:09:46.552: INFO: Got endpoints: latency-svc-wtvrt [1.010715219s] May 6 12:09:46.558: INFO: Created: latency-svc-7xl7c May 6 12:09:46.619: INFO: Got endpoints: latency-svc-7xl7c [956.393395ms] May 6 12:09:46.619: INFO: Created: latency-svc-cmlgd May 6 12:09:46.632: INFO: Got endpoints: latency-svc-cmlgd [875.108488ms] May 6 12:09:46.720: INFO: Created: latency-svc-66495 May 6 12:09:46.722: INFO: Got endpoints: latency-svc-66495 [903.488401ms] May 6 12:09:46.767: INFO: Created: latency-svc-4v887 May 6 12:09:46.791: INFO: Got endpoints: latency-svc-4v887 [937.204853ms] May 6 12:09:46.864: INFO: Created: latency-svc-vwjmm May 6 12:09:46.866: INFO: Got endpoints: latency-svc-vwjmm [913.546323ms] May 6 12:09:46.890: INFO: Created: latency-svc-nfcf2 May 6 12:09:46.903: INFO: Got endpoints: latency-svc-nfcf2 [935.143177ms] May 6 12:09:46.921: INFO: Created: latency-svc-8xq5j May 6 12:09:46.962: INFO: Got endpoints: latency-svc-8xq5j [933.990264ms] May 6 12:09:46.963: INFO: Created: latency-svc-v6h6v May 6 12:09:47.019: INFO: Got endpoints: latency-svc-v6h6v [914.901563ms] May 6 12:09:47.050: INFO: Created: latency-svc-87kjm May 6 12:09:47.071: INFO: Got endpoints: latency-svc-87kjm [951.743051ms] May 6 12:09:47.089: INFO: Created: latency-svc-j6kmc May 6 12:09:47.112: INFO: Got endpoints: latency-svc-j6kmc [920.537268ms] May 6 12:09:47.169: INFO: Created: latency-svc-9knhl May 6 12:09:47.180: INFO: Got endpoints: latency-svc-9knhl [916.087097ms] May 6 12:09:47.212: INFO: Created: latency-svc-mcvpj May 6 12:09:47.241: INFO: Got endpoints: latency-svc-mcvpj [910.431481ms] May 6 12:09:47.259: INFO: Created: latency-svc-jnlwv May 6 12:09:47.295: INFO: Got endpoints: latency-svc-jnlwv [892.814858ms] May 6 12:09:47.304: INFO: Created: latency-svc-5w7zc May 6 12:09:47.320: INFO: Got endpoints: latency-svc-5w7zc [857.670142ms] May 6 12:09:47.341: INFO: Created: latency-svc-xx6l4 May 6 12:09:47.356: INFO: Got endpoints: latency-svc-xx6l4 [804.505847ms] May 6 12:09:47.377: INFO: Created: latency-svc-fkq6v May 6 12:09:47.393: INFO: Got endpoints: latency-svc-fkq6v [773.8565ms] May 6 12:09:47.451: INFO: Created: latency-svc-f5b7q May 6 12:09:47.458: INFO: Got endpoints: latency-svc-f5b7q [826.045756ms] May 6 12:09:47.484: INFO: Created: latency-svc-2s5qq May 6 12:09:47.495: INFO: Got endpoints: latency-svc-2s5qq [773.324416ms] May 6 12:09:47.521: INFO: Created: latency-svc-v8bdl May 6 12:09:47.531: INFO: Got endpoints: latency-svc-v8bdl [740.013345ms] May 6 12:09:47.606: INFO: Created: latency-svc-x4smf May 6 12:09:47.608: INFO: Got endpoints: latency-svc-x4smf [741.768448ms] May 6 12:09:47.649: INFO: Created: latency-svc-cqfgz May 6 12:09:47.670: INFO: Got endpoints: latency-svc-cqfgz [766.496951ms] May 6 12:09:47.761: INFO: Created: latency-svc-sl6hl May 6 12:09:47.790: INFO: Got endpoints: latency-svc-sl6hl [827.459299ms] May 6 12:09:47.815: INFO: Created: latency-svc-jglg9 May 6 12:09:47.832: INFO: Got endpoints: latency-svc-jglg9 [813.261092ms] May 6 12:09:47.907: INFO: Created: latency-svc-q8g2t May 6 12:09:47.910: INFO: Got endpoints: latency-svc-q8g2t [838.253617ms] May 6 12:09:47.962: INFO: Created: latency-svc-6ptjm May 6 12:09:47.988: INFO: Got endpoints: latency-svc-6ptjm [875.874354ms] May 6 12:09:48.050: INFO: Created: latency-svc-dnq2g May 6 12:09:48.056: INFO: Got endpoints: latency-svc-dnq2g [875.824748ms] May 6 12:09:48.080: INFO: Created: latency-svc-k6phv May 6 12:09:48.109: INFO: Got endpoints: 
latency-svc-k6phv [868.193314ms] May 6 12:09:48.129: INFO: Created: latency-svc-h4cks May 6 12:09:48.139: INFO: Got endpoints: latency-svc-h4cks [843.758897ms] May 6 12:09:48.251: INFO: Created: latency-svc-qr7mn May 6 12:09:48.266: INFO: Got endpoints: latency-svc-qr7mn [945.605347ms] May 6 12:09:48.266: INFO: Latencies: [55.554223ms 155.358099ms 284.060855ms 302.079931ms 660.671257ms 740.013345ms 741.768448ms 766.496951ms 773.324416ms 773.8565ms 804.505847ms 813.261092ms 826.045756ms 827.459299ms 829.055864ms 838.253617ms 839.845938ms 843.758897ms 846.981856ms 849.599503ms 852.312247ms 857.670142ms 867.080559ms 868.193314ms 873.873185ms 875.108488ms 875.824748ms 875.874354ms 877.362279ms 886.490493ms 891.23195ms 892.814858ms 896.550062ms 903.15783ms 903.488401ms 907.571451ms 908.537023ms 910.3075ms 910.347397ms 910.431481ms 913.546323ms 914.901563ms 916.087097ms 920.537268ms 928.315704ms 933.990264ms 935.143177ms 937.204853ms 942.544379ms 945.167706ms 945.605347ms 950.899389ms 951.703225ms 951.743051ms 952.262927ms 953.856151ms 956.018616ms 956.393395ms 961.51034ms 962.665108ms 963.105325ms 963.793528ms 964.565987ms 967.518762ms 970.548609ms 973.303528ms 978.359303ms 980.883416ms 994.270085ms 994.627306ms 996.60516ms 997.440026ms 998.712394ms 1.004508984s 1.005229336s 1.007313463s 1.009086134s 1.010715219s 1.011602349s 1.012621549s 1.012926139s 1.017161663s 1.017441299s 1.017563265s 1.022487541s 1.023815703s 1.026139113s 1.027968014s 1.029189897s 1.029633146s 1.030524912s 1.035431373s 1.036629817s 1.040826353s 1.043927068s 1.048947097s 1.05128925s 1.059073645s 1.059211598s 1.059606726s 1.060302053s 1.069592881s 1.069883033s 1.070989016s 1.078769642s 1.078874999s 1.083732133s 1.104455189s 1.112980979s 1.113851116s 1.11974542s 1.126644802s 1.126947832s 1.132105873s 1.135316082s 1.137578056s 1.137796681s 1.148324986s 1.149763687s 1.155289312s 1.156114414s 1.159218443s 1.180680733s 1.186699456s 1.189120429s 1.208810738s 1.212300194s 1.212555393s 1.221291729s 1.230061239s 1.232763726s 1.266434799s 1.298879842s 1.315753221s 1.335439102s 1.376019993s 1.392215706s 1.393850658s 1.394483976s 1.397255844s 1.400683513s 1.4010885s 1.418028428s 1.428561397s 1.452101766s 1.563826227s 1.616054068s 1.628491498s 1.638478348s 1.69940708s 1.716890102s 1.77074137s 1.780347855s 1.787577715s 1.797989791s 1.802807877s 1.82725518s 1.849628394s 1.853873847s 1.861246118s 1.86489067s 1.880548967s 1.882750792s 1.886760416s 1.900167626s 1.900731834s 1.906700283s 1.914450729s 1.921033908s 1.921542652s 1.932190011s 1.937508345s 1.937791579s 1.950941986s 1.962582862s 1.974834966s 1.976017545s 1.981160018s 1.984174253s 1.984755402s 1.986938273s 2.001720021s 2.016069132s 2.017733782s 2.026143537s 2.027387566s 2.028913375s 2.038418357s 2.04127694s 2.04780197s 2.055217038s 2.055856866s 2.093690134s 2.113384813s 2.118268393s 2.219200321s 2.27443315s 2.335109535s 2.338586449s 2.373317299s] May 6 12:09:48.266: INFO: 50 %ile: 1.060302053s May 6 12:09:48.266: INFO: 90 %ile: 1.986938273s May 6 12:09:48.266: INFO: 99 %ile: 2.338586449s May 6 12:09:48.266: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:09:48.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-hzgfg" for this suite. 
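For reference, the 50/90/99 %ile figures reported above are derived from the 200 sorted samples in the Latencies list. A minimal, self-contained Go sketch of a nearest-rank percentile over such samples (the exact rounding rule used by the e2e framework is an assumption and may differ slightly; the sample values below are illustrative, not the full list):

package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// percentile returns the nearest-rank percentile of a sorted slice of durations:
// rank = ceil(p/100 * N). Assumption: the e2e framework may round differently.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	rank := int(math.Ceil(p / 100.0 * float64(len(sorted))))
	if rank < 1 {
		rank = 1
	}
	if rank > len(sorted) {
		rank = len(sorted)
	}
	return sorted[rank-1]
}

func main() {
	// Hypothetical subset of samples; the real test collects 200 endpoint latencies.
	samples := []time.Duration{
		55 * time.Millisecond, 900 * time.Millisecond, 1100 * time.Millisecond,
		1900 * time.Millisecond, 2300 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%v %%ile: %v\n", p, percentile(samples, p))
	}
}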
May 6 12:10:12.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:10:12.420: INFO: namespace: e2e-tests-svc-latency-hzgfg, resource: bindings, ignored listing per whitelist May 6 12:10:12.449: INFO: namespace e2e-tests-svc-latency-hzgfg deletion completed in 24.11209951s • [SLOW TEST:45.802 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:10:12.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-vl8xf [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 6 12:10:12.667: INFO: Found 0 stateful pods, waiting for 3 May 6 12:10:22.683: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 12:10:22.683: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 12:10:22.683: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 6 12:10:32.672: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 12:10:32.672: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 12:10:32.672: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 6 12:10:32.719: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 6 12:10:42.778: INFO: Updating stateful set ss2 May 6 12:10:42.785: INFO: Waiting for Pod e2e-tests-statefulset-vl8xf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 6 12:10:53.261: INFO: Found 2 stateful pods, waiting for 3 May 6 12:11:03.266: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 12:11:03.266: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 12:11:03.266:
INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 6 12:11:03.291: INFO: Updating stateful set ss2 May 6 12:11:03.301: INFO: Waiting for Pod e2e-tests-statefulset-vl8xf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 6 12:11:13.325: INFO: Updating stateful set ss2 May 6 12:11:13.335: INFO: Waiting for StatefulSet e2e-tests-statefulset-vl8xf/ss2 to complete update May 6 12:11:13.335: INFO: Waiting for Pod e2e-tests-statefulset-vl8xf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 6 12:11:23.457: INFO: Waiting for StatefulSet e2e-tests-statefulset-vl8xf/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 6 12:11:33.344: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vl8xf May 6 12:11:33.348: INFO: Scaling statefulset ss2 to 0 May 6 12:11:53.365: INFO: Waiting for statefulset status.replicas updated to 0 May 6 12:11:53.369: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:11:53.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-vl8xf" for this suite. May 6 12:11:59.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:11:59.479: INFO: namespace: e2e-tests-statefulset-vl8xf, resource: bindings, ignored listing per whitelist May 6 12:11:59.488: INFO: namespace e2e-tests-statefulset-vl8xf deletion completed in 6.102833729s • [SLOW TEST:107.039 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:11:59.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 6 12:11:59.572: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 12:11:59.587: INFO: Waiting for terminating namespaces to be deleted... 
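For reference, the canary and phased rolling updates exercised in the StatefulSet test above are driven by the rolling-update partition: only pods whose ordinal is greater than or equal to the partition are moved to the new revision, highest ordinal first. A minimal, self-contained Go sketch of that ordinal rule (illustrative names, not the controller's code):

package main

import "fmt"

// ordinalsToUpdate returns which pod ordinals of a StatefulSet with `replicas`
// pods are rolled to the new revision for a given partition value:
// only ordinals >= partition are updated, highest ordinal first.
func ordinalsToUpdate(replicas, partition int) []int {
	var out []int
	for ord := replicas - 1; ord >= 0 && ord >= partition; ord-- {
		out = append(out, ord)
	}
	return out
}

func main() {
	// ss2 has 3 replicas in the run above.
	fmt.Println(ordinalsToUpdate(3, 4)) // partition > replicas: [] (no update applied)
	fmt.Println(ordinalsToUpdate(3, 2)) // canary: [2] (only ss2-2 rolls)
	fmt.Println(ordinalsToUpdate(3, 1)) // phased: [2 1]
	fmt.Println(ordinalsToUpdate(3, 0)) // full roll-out: [2 1 0]
}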
May 6 12:11:59.590: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 6 12:11:59.595: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 6 12:11:59.595: INFO: Container kindnet-cni ready: true, restart count 0 May 6 12:11:59.595: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 6 12:11:59.595: INFO: Container coredns ready: true, restart count 0 May 6 12:11:59.595: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 6 12:11:59.596: INFO: Container kube-proxy ready: true, restart count 0 May 6 12:11:59.596: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 6 12:11:59.600: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 6 12:11:59.600: INFO: Container kindnet-cni ready: true, restart count 0 May 6 12:11:59.600: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 6 12:11:59.600: INFO: Container coredns ready: true, restart count 0 May 6 12:11:59.600: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 6 12:11:59.600: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160c6f4714a4fa0f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:12:00.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-k7z9n" for this suite. 
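The FailedScheduling event above ("0/3 nodes are available: 3 node(s) didn't match node selector") reflects the scheduler's node-selector predicate: a pod with a non-empty nodeSelector only fits nodes whose labels contain every selector entry. A simplified, self-contained Go sketch of that check (not the scheduler's actual code; the selector and node labels below are hypothetical):

package main

import "fmt"

// matchesNodeSelector reports whether every key/value in the pod's nodeSelector
// is present with the same value in the node's labels (simplified predicate).
func matchesNodeSelector(nodeLabels, nodeSelector map[string]string) bool {
	for k, v := range nodeSelector {
		if nodeLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	// The test schedules a pod with a selector that no node in the cluster carries.
	selector := map[string]string{"e2e-test-label": "no-such-value"} // hypothetical
	nodes := map[string]map[string]string{
		"hunter-control-plane": {"kubernetes.io/hostname": "hunter-control-plane"},
		"hunter-worker":        {"kubernetes.io/hostname": "hunter-worker"},
		"hunter-worker2":       {"kubernetes.io/hostname": "hunter-worker2"},
	}
	fitting := 0
	for name, labels := range nodes {
		if matchesNodeSelector(labels, selector) {
			fitting++
			fmt.Println("fits:", name)
		}
	}
	fmt.Printf("%d/%d nodes match the selector\n", fitting, len(nodes)) // 0/3, as in the event above
}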
May 6 12:12:08.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:12:08.716: INFO: namespace: e2e-tests-sched-pred-k7z9n, resource: bindings, ignored listing per whitelist May 6 12:12:08.752: INFO: namespace e2e-tests-sched-pred-k7z9n deletion completed in 8.108590136s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:9.263 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:12:08.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-d8gj2 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-d8gj2;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-d8gj2 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-d8gj2;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-d8gj2.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-d8gj2.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-d8gj2.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-d8gj2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-d8gj2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-d8gj2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-d8gj2.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-d8gj2.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-d8gj2.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 171.182.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.182.171_udp@PTR;check="$$(dig +tcp +noall +answer +search 171.182.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.182.171_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-d8gj2 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-d8gj2;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-d8gj2 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-d8gj2.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-d8gj2.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-d8gj2.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-d8gj2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-d8gj2.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-d8gj2.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-d8gj2.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-d8gj2.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 171.182.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.182.171_udp@PTR;check="$$(dig +tcp +noall +answer +search 171.182.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.182.171_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 12:12:18.968: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:18.988: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:19.027: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:19.030: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:19.032: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-d8gj2 from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:19.035: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2 from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:19.038: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:19.040: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:19.044: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:19.047: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:19.063: INFO: Lookups using e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-d8gj2 jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2 jessie_udp@dns-test-service.e2e-tests-dns-d8gj2.svc jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc] May 6 
12:12:24.068: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:24.106: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:24.161: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:24.164: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:24.167: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-d8gj2 from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:24.170: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2 from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:24.173: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:24.176: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:24.179: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:24.182: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:24.202: INFO: Lookups using e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-d8gj2 jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2 jessie_udp@dns-test-service.e2e-tests-dns-d8gj2.svc jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc] May 6 12:12:29.068: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 
12:12:29.087: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:29.111: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:29.114: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:29.116: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-d8gj2 from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:29.119: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2 from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:29.122: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:29.125: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:29.128: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:29.131: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:29.148: INFO: Lookups using e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-d8gj2 jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2 jessie_udp@dns-test-service.e2e-tests-dns-d8gj2.svc jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc] May 6 12:12:34.067: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:34.087: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods 
dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:34.154: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:34.156: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:34.159: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-d8gj2 from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:34.162: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2 from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:34.164: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:34.166: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:34.169: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:34.171: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:34.195: INFO: Lookups using e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-d8gj2 jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2 jessie_udp@dns-test-service.e2e-tests-dns-d8gj2.svc jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc] May 6 12:12:39.068: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:39.091: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:39.178: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods 
dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:39.180: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:39.182: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-d8gj2 from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:39.184: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2 from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:39.187: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:39.189: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:39.191: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:39.194: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:39.220: INFO: Lookups using e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-d8gj2 jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2 jessie_udp@dns-test-service.e2e-tests-dns-d8gj2.svc jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc] May 6 12:12:44.069: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:44.093: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:44.136: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:44.139: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods 
dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:44.142: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-d8gj2 from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:44.145: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2 from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:44.148: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:44.151: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:44.155: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:44.157: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc from pod e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017: the server could not find the requested resource (get pods dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017) May 6 12:12:44.176: INFO: Lookups using e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-d8gj2 jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2 jessie_udp@dns-test-service.e2e-tests-dns-d8gj2.svc jessie_tcp@dns-test-service.e2e-tests-dns-d8gj2.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-d8gj2.svc] May 6 12:12:49.158: INFO: DNS probes using e2e-tests-dns-d8gj2/dns-test-cf00b363-8f92-11ea-b5fe-0242ac110017 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:12:49.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-d8gj2" for this suite. 
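The probe commands above exercise the standard in-cluster DNS names for a Service: A records for dns-test-service, dns-test-service.<namespace>, and dns-test-service.<namespace>.svc, SRV records for _http._tcp.<service>.<namespace>.svc, a pod A record of the form <ip-with-dashes>.<namespace>.pod.cluster.local, and a PTR record for the service IP. A minimal Go sketch of equivalent lookups from inside a pod (assumes the default cluster.local domain; the dashed pod IP is a hypothetical placeholder, the service IP is the one probed via PTR in this run):

package main

import (
	"fmt"
	"net"
)

func main() {
	ns := "e2e-tests-dns-d8gj2" // namespace used in the run above

	// A-record lookups at increasing degrees of qualification.
	for _, name := range []string{
		"dns-test-service",
		"dns-test-service." + ns,
		"dns-test-service." + ns + ".svc.cluster.local",
	} {
		addrs, err := net.LookupHost(name)
		fmt.Printf("A   %-55s -> %v err=%v\n", name, addrs, err)
	}

	// SRV lookup: LookupSRV("http", "tcp", name) queries _http._tcp.<name>.
	cname, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service."+ns+".svc.cluster.local")
	fmt.Println("SRV", cname, len(srvs), err)

	// Pod A record and reverse (PTR) lookup for the service IP seen in the log.
	podARecord := "10-244-1-241." + ns + ".pod.cluster.local" // hypothetical dashed pod IP
	addrs, err := net.LookupHost(podARecord)
	fmt.Println("PodA", addrs, err)
	names, err := net.LookupAddr("10.104.182.171") // service IP probed via PTR above
	fmt.Println("PTR", names, err)
}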
May 6 12:12:55.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:12:55.357: INFO: namespace: e2e-tests-dns-d8gj2, resource: bindings, ignored listing per whitelist May 6 12:12:55.375: INFO: namespace e2e-tests-dns-d8gj2 deletion completed in 6.101239474s • [SLOW TEST:46.622 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:12:55.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-8879x STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 12:12:55.517: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 6 12:13:23.871: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.242:8080/dial?request=hostName&protocol=http&host=10.244.1.241&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-8879x PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 12:13:23.871: INFO: >>> kubeConfig: /root/.kube/config I0506 12:13:23.902259 7 log.go:172] (0xc002032420) (0xc002869c20) Create stream I0506 12:13:23.902291 7 log.go:172] (0xc002032420) (0xc002869c20) Stream added, broadcasting: 1 I0506 12:13:23.903821 7 log.go:172] (0xc002032420) Reply frame received for 1 I0506 12:13:23.903846 7 log.go:172] (0xc002032420) (0xc00185e780) Create stream I0506 12:13:23.903855 7 log.go:172] (0xc002032420) (0xc00185e780) Stream added, broadcasting: 3 I0506 12:13:23.904499 7 log.go:172] (0xc002032420) Reply frame received for 3 I0506 12:13:23.904528 7 log.go:172] (0xc002032420) (0xc002869cc0) Create stream I0506 12:13:23.904538 7 log.go:172] (0xc002032420) (0xc002869cc0) Stream added, broadcasting: 5 I0506 12:13:23.905610 7 log.go:172] (0xc002032420) Reply frame received for 5 I0506 12:13:24.048414 7 log.go:172] (0xc002032420) Data frame received for 3 I0506 12:13:24.048442 7 log.go:172] (0xc00185e780) (3) Data frame handling I0506 12:13:24.048463 7 log.go:172] (0xc00185e780) (3) Data frame sent I0506 12:13:24.049312 7 log.go:172] (0xc002032420) Data frame received for 5 I0506 12:13:24.049402 7 log.go:172] (0xc002869cc0) (5) Data frame handling I0506 12:13:24.049470 7 log.go:172] (0xc002032420) Data frame received for 3 I0506 12:13:24.049510 7 log.go:172] (0xc00185e780) (3) Data frame handling I0506 12:13:24.051437 7 log.go:172] (0xc002032420) Data frame received for 1 I0506 12:13:24.051472 7 log.go:172] (0xc002869c20) (1) Data 
frame handling I0506 12:13:24.051504 7 log.go:172] (0xc002869c20) (1) Data frame sent I0506 12:13:24.051521 7 log.go:172] (0xc002032420) (0xc002869c20) Stream removed, broadcasting: 1 I0506 12:13:24.051539 7 log.go:172] (0xc002032420) Go away received I0506 12:13:24.051702 7 log.go:172] (0xc002032420) (0xc002869c20) Stream removed, broadcasting: 1 I0506 12:13:24.051726 7 log.go:172] (0xc002032420) (0xc00185e780) Stream removed, broadcasting: 3 I0506 12:13:24.051738 7 log.go:172] (0xc002032420) (0xc002869cc0) Stream removed, broadcasting: 5 May 6 12:13:24.051: INFO: Waiting for endpoints: map[] May 6 12:13:24.055: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.242:8080/dial?request=hostName&protocol=http&host=10.244.2.197&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-8879x PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 12:13:24.055: INFO: >>> kubeConfig: /root/.kube/config I0506 12:13:24.086531 7 log.go:172] (0xc0020328f0) (0xc001cc20a0) Create stream I0506 12:13:24.086572 7 log.go:172] (0xc0020328f0) (0xc001cc20a0) Stream added, broadcasting: 1 I0506 12:13:24.088638 7 log.go:172] (0xc0020328f0) Reply frame received for 1 I0506 12:13:24.088682 7 log.go:172] (0xc0020328f0) (0xc0026fe000) Create stream I0506 12:13:24.088696 7 log.go:172] (0xc0020328f0) (0xc0026fe000) Stream added, broadcasting: 3 I0506 12:13:24.089792 7 log.go:172] (0xc0020328f0) Reply frame received for 3 I0506 12:13:24.089825 7 log.go:172] (0xc0020328f0) (0xc00185e820) Create stream I0506 12:13:24.089835 7 log.go:172] (0xc0020328f0) (0xc00185e820) Stream added, broadcasting: 5 I0506 12:13:24.090716 7 log.go:172] (0xc0020328f0) Reply frame received for 5 I0506 12:13:24.163650 7 log.go:172] (0xc0020328f0) Data frame received for 3 I0506 12:13:24.163681 7 log.go:172] (0xc0026fe000) (3) Data frame handling I0506 12:13:24.163702 7 log.go:172] (0xc0026fe000) (3) Data frame sent I0506 12:13:24.164156 7 log.go:172] (0xc0020328f0) Data frame received for 3 I0506 12:13:24.164181 7 log.go:172] (0xc0026fe000) (3) Data frame handling I0506 12:13:24.164245 7 log.go:172] (0xc0020328f0) Data frame received for 5 I0506 12:13:24.164260 7 log.go:172] (0xc00185e820) (5) Data frame handling I0506 12:13:24.166088 7 log.go:172] (0xc0020328f0) Data frame received for 1 I0506 12:13:24.166140 7 log.go:172] (0xc001cc20a0) (1) Data frame handling I0506 12:13:24.166183 7 log.go:172] (0xc001cc20a0) (1) Data frame sent I0506 12:13:24.166204 7 log.go:172] (0xc0020328f0) (0xc001cc20a0) Stream removed, broadcasting: 1 I0506 12:13:24.166229 7 log.go:172] (0xc0020328f0) Go away received I0506 12:13:24.166482 7 log.go:172] (0xc0020328f0) (0xc001cc20a0) Stream removed, broadcasting: 1 I0506 12:13:24.166501 7 log.go:172] (0xc0020328f0) (0xc0026fe000) Stream removed, broadcasting: 3 I0506 12:13:24.166518 7 log.go:172] (0xc0020328f0) (0xc00185e820) Stream removed, broadcasting: 5 May 6 12:13:24.166: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:13:24.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-8879x" for this suite. 
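The ExecWithOptions calls above run curl inside host-test-container-pod against the test webserver's /dial endpoint, which in turn dials the target pod and reports which hostnames answered. A minimal Go sketch of the same HTTP probe (pod IPs and query parameters are taken from the command in the log above; outside the test cluster the request will simply fail):

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

func main() {
	// Ask the pod at 10.244.1.242 to dial the pod at 10.244.1.241 over HTTP.
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "http")
	q.Set("host", "10.244.1.241")
	q.Set("port", "8080")
	q.Set("tries", "1")
	probeURL := "http://10.244.1.242:8080/dial?" + q.Encode()

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(probeURL)
	if err != nil {
		fmt.Println("probe failed:", err) // expected when run outside the cluster
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%s body=%s\n", resp.Status, body) // body lists the responding hostnames
}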
May 6 12:13:52.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:13:52.638: INFO: namespace: e2e-tests-pod-network-test-8879x, resource: bindings, ignored listing per whitelist May 6 12:13:52.645: INFO: namespace e2e-tests-pod-network-test-8879x deletion completed in 28.241336076s • [SLOW TEST:57.271 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:13:52.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 6 12:13:59.500: INFO: 10 pods remaining May 6 12:13:59.500: INFO: 10 pods has nil DeletionTimestamp May 6 12:13:59.500: INFO: May 6 12:14:01.422: INFO: 9 pods remaining May 6 12:14:01.422: INFO: 7 pods has nil DeletionTimestamp May 6 12:14:01.422: INFO: May 6 12:14:03.365: INFO: 0 pods remaining May 6 12:14:03.365: INFO: 0 pods has nil DeletionTimestamp May 6 12:14:03.365: INFO: STEP: Gathering metrics W0506 12:14:05.906459 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
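This garbage-collector test deletes the ReplicationController with deleteOptions whose propagation policy keeps the owner around until its dependents are gone, which is why the RC lingers while "10 pods remaining" counts down. A minimal sketch of constructing such options with the apimachinery types (just building and printing the object under the assumption of foreground cascading deletion, not a full client call):

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Foreground propagation: the owner is kept (with a deletion timestamp) until
	// the garbage collector has deleted all of its dependents.
	policy := metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	b, _ := json.Marshal(opts)
	fmt.Println(string(b)) // e.g. {"propagationPolicy":"Foreground"}
}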
May 6 12:14:05.906: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:14:05.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-75cl9" for this suite. May 6 12:14:23.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:14:23.028: INFO: namespace: e2e-tests-gc-75cl9, resource: bindings, ignored listing per whitelist May 6 12:14:23.058: INFO: namespace e2e-tests-gc-75cl9 deletion completed in 16.338104563s • [SLOW TEST:30.412 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:14:23.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-1f9c1193-8f93-11ea-b5fe-0242ac110017 STEP: Creating secret with name s-test-opt-upd-1f9c11e3-8f93-11ea-b5fe-0242ac110017 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1f9c1193-8f93-11ea-b5fe-0242ac110017 STEP: Updating secret s-test-opt-upd-1f9c11e3-8f93-11ea-b5fe-0242ac110017 STEP: Creating secret with name s-test-opt-create-1f9c11fc-8f93-11ea-b5fe-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:15:46.286: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-lmhwv" for this suite. May 6 12:16:10.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:16:10.400: INFO: namespace: e2e-tests-secrets-lmhwv, resource: bindings, ignored listing per whitelist May 6 12:16:10.447: INFO: namespace e2e-tests-secrets-lmhwv deletion completed in 24.159780939s • [SLOW TEST:107.390 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:16:10.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
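The "DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master ... Effect:NoSchedule}]" lines that follow reflect the basic taint/toleration rule: a pod is only placed on a tainted node if it carries a toleration matching the taint. A simplified, self-contained Go sketch of that check (not the scheduler's real implementation, which also handles operators and other effects):

package main

import "fmt"

// Taint and Toleration are trimmed-down stand-ins for the corev1 types.
type Taint struct{ Key, Value, Effect string }
type Toleration struct{ Key, Value, Effect string }

// tolerates reports whether a toleration matches a taint
// (simplified: exact key/value match, empty toleration effect matches any effect).
func tolerates(tol Toleration, t Taint) bool {
	if tol.Key != t.Key {
		return false
	}
	if tol.Effect != "" && tol.Effect != t.Effect {
		return false
	}
	return tol.Value == t.Value
}

// podFitsTaints reports whether every NoSchedule taint on the node is tolerated.
func podFitsTaints(tolerations []Toleration, taints []Taint) bool {
	for _, t := range taints {
		if t.Effect != "NoSchedule" {
			continue
		}
		ok := false
		for _, tol := range tolerations {
			if tolerates(tol, t) {
				ok = true
				break
			}
		}
		if !ok {
			return false
		}
	}
	return true
}

func main() {
	master := []Taint{{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}}
	worker := []Taint{}         // hunter-worker / hunter-worker2 carry no taints in this run
	daemonPod := []Toleration{} // the test DaemonSet adds no master toleration

	fmt.Println("hunter-control-plane:", podFitsTaints(daemonPod, master)) // false -> node skipped
	fmt.Println("hunter-worker:", podFitsTaints(daemonPod, worker))        // true  -> runs a daemon pod
}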
May 6 12:16:10.679: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:16:10.682: INFO: Number of nodes with available pods: 0 May 6 12:16:10.682: INFO: Node hunter-worker is running more than one daemon pod May 6 12:16:11.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:16:11.690: INFO: Number of nodes with available pods: 0 May 6 12:16:11.690: INFO: Node hunter-worker is running more than one daemon pod May 6 12:16:12.810: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:16:13.238: INFO: Number of nodes with available pods: 0 May 6 12:16:13.238: INFO: Node hunter-worker is running more than one daemon pod May 6 12:16:13.932: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:16:14.084: INFO: Number of nodes with available pods: 0 May 6 12:16:14.084: INFO: Node hunter-worker is running more than one daemon pod May 6 12:16:14.739: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:16:14.751: INFO: Number of nodes with available pods: 0 May 6 12:16:14.751: INFO: Node hunter-worker is running more than one daemon pod May 6 12:16:15.686: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:16:15.688: INFO: Number of nodes with available pods: 1 May 6 12:16:15.688: INFO: Node hunter-worker is running more than one daemon pod May 6 12:16:16.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:16:16.691: INFO: Number of nodes with available pods: 2 May 6 12:16:16.691: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 6 12:16:16.753: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:16:16.846: INFO: Number of nodes with available pods: 2 May 6 12:16:16.846: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
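The repeated "DaemonSet pods can't tolerate node hunter-control-plane" lines above show the e2e helper skipping the tainted control-plane node when it counts available daemon pods, so only the two worker nodes are expected to run a copy. For reference, a minimal sketch of a DaemonSet that would also cover such a node by tolerating the node-role.kubernetes.io/master:NoSchedule taint; the label set, container name and image are illustrative, and the struct field names assume a current k8s.io/api module rather than the v1.13 tree this suite was built from.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"app": "daemon-set"} // hypothetical label set

	ds := appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "DaemonSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Without this toleration the daemon pods stay off tainted
					// control-plane nodes, which is exactly what the log reports.
					Tolerations: []corev1.Toleration{{
						Key:      "node-role.kubernetes.io/master",
						Operator: corev1.TolerationOpExists,
						Effect:   corev1.TaintEffectNoSchedule,
					}},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}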
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-2cnsx, will wait for the garbage collector to delete the pods May 6 12:16:18.118: INFO: Deleting DaemonSet.extensions daemon-set took: 169.861748ms May 6 12:16:18.518: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.237929ms May 6 12:16:31.321: INFO: Number of nodes with available pods: 0 May 6 12:16:31.321: INFO: Number of running nodes: 0, number of available pods: 0 May 6 12:16:31.323: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-2cnsx/daemonsets","resourceVersion":"9046082"},"items":null} May 6 12:16:31.326: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-2cnsx/pods","resourceVersion":"9046082"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:16:31.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-2cnsx" for this suite. May 6 12:16:37.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:16:37.445: INFO: namespace: e2e-tests-daemonsets-2cnsx, resource: bindings, ignored listing per whitelist May 6 12:16:37.460: INFO: namespace e2e-tests-daemonsets-2cnsx deletion completed in 6.116418844s • [SLOW TEST:27.013 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:16:37.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 12:16:37.603: INFO: Creating deployment "test-recreate-deployment" May 6 12:16:37.615: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 6 12:16:37.636: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 6 12:16:39.935: INFO: Waiting deployment "test-recreate-deployment" to complete May 6 12:16:40.163: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724364197, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724364197, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724364197, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724364197, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 12:16:42.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724364197, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724364197, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724364197, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724364197, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 12:16:44.166: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 6 12:16:44.173: INFO: Updating deployment test-recreate-deployment May 6 12:16:44.173: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 6 12:16:44.913: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-c92th,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-c92th/deployments/test-recreate-deployment,UID:6f2792ab-8f93-11ea-99e8-0242ac110002,ResourceVersion:9046175,Generation:2,CreationTimestamp:2020-05-06 12:16:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-06 12:16:44 +0000 UTC 2020-05-06 12:16:44 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-06 12:16:44 +0000 UTC 2020-05-06 12:16:37 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 6 12:16:44.918: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-c92th,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-c92th/replicasets/test-recreate-deployment-589c4bfd,UID:731f5a8c-8f93-11ea-99e8-0242ac110002,ResourceVersion:9046173,Generation:1,CreationTimestamp:2020-05-06 12:16:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6f2792ab-8f93-11ea-99e8-0242ac110002 0xc001f95dcf 0xc001f95eb0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 6 12:16:44.918: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 6 12:16:44.918: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-c92th,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-c92th/replicasets/test-recreate-deployment-5bf7f65dc,UID:6f2c44f3-8f93-11ea-99e8-0242ac110002,ResourceVersion:9046163,Generation:2,CreationTimestamp:2020-05-06 12:16:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 6f2792ab-8f93-11ea-99e8-0242ac110002 0xc0024f0010 0xc0024f0011}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 6 12:16:44.921: INFO: Pod "test-recreate-deployment-589c4bfd-6dksb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-6dksb,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-c92th,SelfLink:/api/v1/namespaces/e2e-tests-deployment-c92th/pods/test-recreate-deployment-589c4bfd-6dksb,UID:732216c2-8f93-11ea-99e8-0242ac110002,ResourceVersion:9046176,Generation:0,CreationTimestamp:2020-05-06 12:16:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 731f5a8c-8f93-11ea-99e8-0242ac110002 0xc0024f188f 0xc0024f18a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rz79b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rz79b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rz79b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0024f1b20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024f1b40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 12:16:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 12:16:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 12:16:44 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 12:16:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-06 12:16:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:16:44.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-c92th" for this suite. May 6 12:16:50.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:16:50.971: INFO: namespace: e2e-tests-deployment-c92th, resource: bindings, ignored listing per whitelist May 6 12:16:51.018: INFO: namespace e2e-tests-deployment-c92th deletion completed in 6.093775718s • [SLOW TEST:13.557 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:16:51.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-776f5313-8f93-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 12:16:51.609: INFO: Waiting up to 5m0s for pod "pod-configmaps-7772bff5-8f93-11ea-b5fe-0242ac110017" in namespace "e2e-tests-configmap-7x9q8" to be "success or failure" May 6 12:16:51.621: INFO: Pod "pod-configmaps-7772bff5-8f93-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 11.715096ms May 6 12:16:53.702: INFO: Pod "pod-configmaps-7772bff5-8f93-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093002127s May 6 12:16:55.710: INFO: Pod "pod-configmaps-7772bff5-8f93-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.101387818s STEP: Saw pod success May 6 12:16:55.711: INFO: Pod "pod-configmaps-7772bff5-8f93-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:16:55.712: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-7772bff5-8f93-11ea-b5fe-0242ac110017 container configmap-volume-test: STEP: delete the pod May 6 12:16:55.848: INFO: Waiting for pod pod-configmaps-7772bff5-8f93-11ea-b5fe-0242ac110017 to disappear May 6 12:16:55.854: INFO: Pod pod-configmaps-7772bff5-8f93-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:16:55.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7x9q8" for this suite. May 6 12:17:01.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:17:01.886: INFO: namespace: e2e-tests-configmap-7x9q8, resource: bindings, ignored listing per whitelist May 6 12:17:01.951: INFO: namespace e2e-tests-configmap-7x9q8 deletion completed in 6.093653634s • [SLOW TEST:10.932 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:17:01.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 6 12:17:12.175: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 12:17:12.202: INFO: Pod pod-with-poststart-exec-hook still exists May 6 12:17:14.203: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 12:17:14.206: INFO: Pod pod-with-poststart-exec-hook still exists May 6 12:17:16.203: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 12:17:16.229: INFO: Pod pod-with-poststart-exec-hook still exists May 6 12:17:18.203: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 12:17:18.206: INFO: Pod pod-with-poststart-exec-hook still exists May 6 12:17:20.203: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 12:17:20.206: INFO: Pod pod-with-poststart-exec-hook still exists May 6 12:17:22.203: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 12:17:22.206: INFO: Pod pod-with-poststart-exec-hook still exists May 6 12:17:24.203: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 12:17:24.206: INFO: Pod pod-with-poststart-exec-hook still exists May 6 12:17:26.203: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 12:17:26.218: INFO: Pod pod-with-poststart-exec-hook still exists May 6 12:17:28.203: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 12:17:28.216: INFO: Pod pod-with-poststart-exec-hook still exists May 6 12:17:30.203: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 12:17:30.206: INFO: Pod pod-with-poststart-exec-hook still exists May 6 12:17:32.203: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 12:17:32.206: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:17:32.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-ltvls" for this suite. 
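The lifecycle test above creates a pod whose container declares an exec PostStart hook, confirms the hook ran, then deletes the pod and polls until it disappears. A minimal sketch of a pod with that shape of hook follows; the image, commands and file path are illustrative, and the hook type is named LifecycleHandler in current k8s.io/api (the v1.13-era API this run used calls it Handler).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs right after the container starts; if the hook fails the
					// container is killed and handled by its restart policy.
					PostStart: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo poststart > /tmp/poststart"},
						},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}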
May 6 12:17:54.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:17:54.262: INFO: namespace: e2e-tests-container-lifecycle-hook-ltvls, resource: bindings, ignored listing per whitelist May 6 12:17:54.283: INFO: namespace e2e-tests-container-lifecycle-hook-ltvls deletion completed in 22.07387385s • [SLOW TEST:52.332 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:17:54.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:17:58.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-r4k7k" for this suite. 
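The Kubelet hostAliases test above schedules a busybox pod and verifies that the declared aliases end up in the container's /etc/hosts, which the kubelet manages. A sketch of the relevant part of the spec, with made-up addresses and hostnames:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
		Spec: corev1.PodSpec{
			// The kubelet appends these entries to /etc/hosts inside the container.
			HostAliases: []corev1.HostAlias{{
				IP:        "123.45.67.89",
				Hostnames: []string{"foo.local", "bar.local"},
			}},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hosts && sleep 600"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}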
May 6 12:18:44.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:18:44.448: INFO: namespace: e2e-tests-kubelet-test-r4k7k, resource: bindings, ignored listing per whitelist May 6 12:18:44.476: INFO: namespace e2e-tests-kubelet-test-r4k7k deletion completed in 46.077080665s • [SLOW TEST:50.193 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:18:44.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-h8n7 STEP: Creating a pod to test atomic-volume-subpath May 6 12:18:44.600: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h8n7" in namespace "e2e-tests-subpath-vtbsz" to be "success or failure" May 6 12:18:44.630: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Pending", Reason="", readiness=false. Elapsed: 29.838608ms May 6 12:18:46.635: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03442288s May 6 12:18:48.649: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04876554s May 6 12:18:50.652: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051662307s May 6 12:18:52.655: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Running", Reason="", readiness=false. Elapsed: 8.055014949s May 6 12:18:54.660: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Running", Reason="", readiness=false. Elapsed: 10.059108317s May 6 12:18:56.764: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Running", Reason="", readiness=false. Elapsed: 12.163110051s May 6 12:18:58.766: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Running", Reason="", readiness=false. Elapsed: 14.165972155s May 6 12:19:00.771: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Running", Reason="", readiness=false. Elapsed: 16.17044879s May 6 12:19:02.774: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Running", Reason="", readiness=false. Elapsed: 18.173973939s May 6 12:19:04.804: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.203554348s May 6 12:19:06.811: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Running", Reason="", readiness=false. Elapsed: 22.210629669s May 6 12:19:08.816: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Running", Reason="", readiness=false. Elapsed: 24.215126219s May 6 12:19:10.820: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Running", Reason="", readiness=false. Elapsed: 26.21951032s May 6 12:19:12.824: INFO: Pod "pod-subpath-test-configmap-h8n7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.22337004s STEP: Saw pod success May 6 12:19:12.824: INFO: Pod "pod-subpath-test-configmap-h8n7" satisfied condition "success or failure" May 6 12:19:12.827: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-h8n7 container test-container-subpath-configmap-h8n7: STEP: delete the pod May 6 12:19:13.079: INFO: Waiting for pod pod-subpath-test-configmap-h8n7 to disappear May 6 12:19:13.108: INFO: Pod pod-subpath-test-configmap-h8n7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-h8n7 May 6 12:19:13.109: INFO: Deleting pod "pod-subpath-test-configmap-h8n7" in namespace "e2e-tests-subpath-vtbsz" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:19:13.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-vtbsz" for this suite. May 6 12:19:19.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:19:19.139: INFO: namespace: e2e-tests-subpath-vtbsz, resource: bindings, ignored listing per whitelist May 6 12:19:19.186: INFO: namespace e2e-tests-subpath-vtbsz deletion completed in 6.072443613s • [SLOW TEST:34.710 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:19:19.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 6 12:19:19.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 
--image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-cgsvb' May 6 12:19:21.821: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 6 12:19:21.821: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 6 12:19:21.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-cgsvb' May 6 12:19:21.992: INFO: stderr: "" May 6 12:19:21.992: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:19:21.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cgsvb" for this suite. May 6 12:19:28.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:19:28.079: INFO: namespace: e2e-tests-kubectl-cgsvb, resource: bindings, ignored listing per whitelist May 6 12:19:28.146: INFO: namespace e2e-tests-kubectl-cgsvb deletion completed in 6.150362337s • [SLOW TEST:8.960 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:19:28.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 6 12:19:28.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-4k48j' May 6 12:19:28.371: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 6 12:19:28.371: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 May 6 12:19:30.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-4k48j' May 6 12:19:30.906: INFO: stderr: "" May 6 12:19:30.906: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:19:30.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4k48j" for this suite. May 6 12:19:52.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:19:52.953: INFO: namespace: e2e-tests-kubectl-4k48j, resource: bindings, ignored listing per whitelist May 6 12:19:52.996: INFO: namespace e2e-tests-kubectl-4k48j deletion completed in 22.071991758s • [SLOW TEST:24.850 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:19:52.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-pw2m8 May 6 12:19:57.119: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-pw2m8 STEP: checking the pod's current state and verifying that restartCount is present May 6 12:19:57.122: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:23:57.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-pw2m8" for this suite. 
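The probe test above starts pod liveness-exec, records an initial restart count of 0 and re-checks it for four minutes: because the probed file keeps existing, the exec probe keeps succeeding and the kubelet never restarts the container. A sketch of that kind of spec, with illustrative image, command and timings; in current k8s.io/api the probe's action sits in an embedded ProbeHandler (the v1.13 API embeds Handler instead).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// The file is created once and never removed, so the probe keeps
				// passing and the container is never restarted.
				Command: []string{"sh", "-c", "touch /tmp/health && sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}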
May 6 12:24:03.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:24:04.015: INFO: namespace: e2e-tests-container-probe-pw2m8, resource: bindings, ignored listing per whitelist May 6 12:24:04.051: INFO: namespace e2e-tests-container-probe-pw2m8 deletion completed in 6.079634022s • [SLOW TEST:251.055 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:24:04.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 12:24:04.262: INFO: Waiting up to 5m0s for pod "downwardapi-volume-795d4b46-8f94-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-lhtr7" to be "success or failure" May 6 12:24:04.294: INFO: Pod "downwardapi-volume-795d4b46-8f94-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 32.268438ms May 6 12:24:06.298: INFO: Pod "downwardapi-volume-795d4b46-8f94-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035980727s May 6 12:24:08.302: INFO: Pod "downwardapi-volume-795d4b46-8f94-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.040585769s May 6 12:24:10.307: INFO: Pod "downwardapi-volume-795d4b46-8f94-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044728528s STEP: Saw pod success May 6 12:24:10.307: INFO: Pod "downwardapi-volume-795d4b46-8f94-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:24:10.309: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-795d4b46-8f94-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 12:24:10.338: INFO: Waiting for pod downwardapi-volume-795d4b46-8f94-11ea-b5fe-0242ac110017 to disappear May 6 12:24:10.340: INFO: Pod downwardapi-volume-795d4b46-8f94-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:24:10.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-lhtr7" for this suite. 
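The projected downwardAPI test above mounts the container's own CPU limit into the pod as a file and reads it back from the volume. A sketch of a pod doing the same through a projected volume; the 500m limit, mount path and file name are illustrative rather than the suite's fixture values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									// limits.cpu of client-container, exposed as a file.
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}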
May 6 12:24:16.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:24:16.400: INFO: namespace: e2e-tests-projected-lhtr7, resource: bindings, ignored listing per whitelist May 6 12:24:16.440: INFO: namespace e2e-tests-projected-lhtr7 deletion completed in 6.096606818s • [SLOW TEST:12.389 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:24:16.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-80bd63b0-8f94-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 12:24:16.680: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-80c2d658-8f94-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-vdqkl" to be "success or failure" May 6 12:24:16.732: INFO: Pod "pod-projected-secrets-80c2d658-8f94-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 51.289869ms May 6 12:24:18.735: INFO: Pod "pod-projected-secrets-80c2d658-8f94-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054451267s May 6 12:24:20.738: INFO: Pod "pod-projected-secrets-80c2d658-8f94-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057846172s May 6 12:24:22.741: INFO: Pod "pod-projected-secrets-80c2d658-8f94-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06095199s STEP: Saw pod success May 6 12:24:22.741: INFO: Pod "pod-projected-secrets-80c2d658-8f94-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:24:22.743: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-80c2d658-8f94-11ea-b5fe-0242ac110017 container secret-volume-test: STEP: delete the pod May 6 12:24:22.788: INFO: Waiting for pod pod-projected-secrets-80c2d658-8f94-11ea-b5fe-0242ac110017 to disappear May 6 12:24:22.816: INFO: Pod pod-projected-secrets-80c2d658-8f94-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:24:22.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vdqkl" for this suite. 
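The projected secret test above consumes one secret from more than one volume in the same pod and expects identical content at every mount point. A sketch of that arrangement with a hypothetical secret name; the secret itself would have to exist in the namespace before the pod is created.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretVolume builds a projected volume that exposes the named secret.
func secretVolume(volName, secretName string) corev1.Volume {
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
					},
				}},
			},
		},
	}
}

func main() {
	const secretName = "projected-secret-demo" // hypothetical name

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				secretVolume("secret-volume-1", secretName),
				secretVolume("secret-volume-2", secretName),
			},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}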
May 6 12:24:28.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:24:28.871: INFO: namespace: e2e-tests-projected-vdqkl, resource: bindings, ignored listing per whitelist May 6 12:24:28.910: INFO: namespace e2e-tests-projected-vdqkl deletion completed in 6.09054779s • [SLOW TEST:12.469 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:24:28.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:25:29.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-mjssh" for this suite. 
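The readiness test above runs a container whose readiness probe never succeeds and asserts, over roughly a minute, that the pod never becomes Ready and its restart count stays at 0: a failing readiness probe only keeps the pod out of service endpoints, it never restarts the container (that is the liveness probe's job). A sketch with an always-failing exec probe and illustrative timings; the ProbeHandler naming again follows current k8s.io/api.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"},
				ReadinessProbe: &corev1.Probe{
					// /bin/false always exits non-zero, so the container stays NotReady.
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}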
May 6 12:25:51.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:25:51.052: INFO: namespace: e2e-tests-container-probe-mjssh, resource: bindings, ignored listing per whitelist May 6 12:25:51.102: INFO: namespace e2e-tests-container-probe-mjssh deletion completed in 22.079389759s • [SLOW TEST:82.192 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:25:51.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command May 6 12:25:51.333: INFO: Waiting up to 5m0s for pod "var-expansion-b9310def-8f94-11ea-b5fe-0242ac110017" in namespace "e2e-tests-var-expansion-q68wf" to be "success or failure" May 6 12:25:51.338: INFO: Pod "var-expansion-b9310def-8f94-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.307067ms May 6 12:25:53.398: INFO: Pod "var-expansion-b9310def-8f94-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064606677s May 6 12:25:55.434: INFO: Pod "var-expansion-b9310def-8f94-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100760613s STEP: Saw pod success May 6 12:25:55.434: INFO: Pod "var-expansion-b9310def-8f94-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:25:55.436: INFO: Trying to get logs from node hunter-worker pod var-expansion-b9310def-8f94-11ea-b5fe-0242ac110017 container dapi-container: STEP: delete the pod May 6 12:25:55.638: INFO: Waiting for pod var-expansion-b9310def-8f94-11ea-b5fe-0242ac110017 to disappear May 6 12:25:55.644: INFO: Pod var-expansion-b9310def-8f94-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:25:55.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-q68wf" for this suite. 
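The var-expansion test above verifies that $(VAR) references in a container's command are substituted by the kubelet from that container's environment before the process is started; no shell is involved in the expansion. A sketch with made-up variable names and values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				Env: []corev1.EnvVar{{
					Name:  "MESSAGE",
					Value: "hello from var expansion", // hypothetical value
				}},
				// The kubelet rewrites $(MESSAGE) from the env block before exec.
				Command: []string{"echo", "test-value: $(MESSAGE)"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}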
May 6 12:26:01.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:26:01.735: INFO: namespace: e2e-tests-var-expansion-q68wf, resource: bindings, ignored listing per whitelist May 6 12:26:01.762: INFO: namespace e2e-tests-var-expansion-q68wf deletion completed in 6.110999804s • [SLOW TEST:10.659 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:26:01.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-m5b4q STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-m5b4q STEP: Deleting pre-stop pod May 6 12:26:14.921: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:26:14.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-m5b4q" for this suite. 
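The PreStop test above runs a server pod plus a tester pod whose preStop hook reports back to the server when the tester is deleted; the JSON block in the log is the server's record, with "prestop": 1 confirming the hook fired. A minimal sketch of a container with an exec preStop hook (the suite's HTTP variant behaves the same way); the commands, paths and 30s grace period are illustrative, and the hook type name again follows current k8s.io/api.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64p(v int64) *int64 { return &v }

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			// The hook has to finish within the termination grace period,
			// otherwise the container is killed regardless.
			TerminationGracePeriodSeconds: int64p(30),
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs when the pod is deleted, before SIGTERM reaches the process.
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo bye > /tmp/prestop"},
						},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}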
May 6 12:26:52.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:26:52.944: INFO: namespace: e2e-tests-prestop-m5b4q, resource: bindings, ignored listing per whitelist May 6 12:26:53.001: INFO: namespace e2e-tests-prestop-m5b4q deletion completed in 38.072026154s • [SLOW TEST:51.240 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:26:53.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 12:26:53.080: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de00dc11-8f94-11ea-b5fe-0242ac110017" in namespace "e2e-tests-downward-api-vmn9z" to be "success or failure" May 6 12:26:53.139: INFO: Pod "downwardapi-volume-de00dc11-8f94-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 58.660419ms May 6 12:26:55.142: INFO: Pod "downwardapi-volume-de00dc11-8f94-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061630404s May 6 12:26:57.189: INFO: Pod "downwardapi-volume-de00dc11-8f94-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108208781s May 6 12:26:59.193: INFO: Pod "downwardapi-volume-de00dc11-8f94-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.113095259s STEP: Saw pod success May 6 12:26:59.193: INFO: Pod "downwardapi-volume-de00dc11-8f94-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:26:59.196: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-de00dc11-8f94-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 12:26:59.237: INFO: Waiting for pod downwardapi-volume-de00dc11-8f94-11ea-b5fe-0242ac110017 to disappear May 6 12:26:59.272: INFO: Pod downwardapi-volume-de00dc11-8f94-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:26:59.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-vmn9z" for this suite. 
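The Downward API case above mounts a volume exposing limits.cpu for a container that declares no CPU limit; in that situation the downward API falls back to the node's allocatable CPU, which is what the test verifies. A minimal sketch of that volume wiring, with hypothetical names and image:

package main

import (
    "encoding/json"
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                // No resources.limits.cpu is set, so the projected file below
                // reports the node's allocatable CPU instead.
                VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []v1.Volume{{
                Name: "podinfo",
                VolumeSource: v1.VolumeSource{
                    DownwardAPI: &v1.DownwardAPIVolumeSource{
                        Items: []v1.DownwardAPIVolumeFile{{
                            Path: "cpu_limit",
                            ResourceFieldRef: &v1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "limits.cpu",
                            },
                        }},
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}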
May 6 12:27:05.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:27:05.358: INFO: namespace: e2e-tests-downward-api-vmn9z, resource: bindings, ignored listing per whitelist May 6 12:27:05.394: INFO: namespace e2e-tests-downward-api-vmn9z deletion completed in 6.119016109s • [SLOW TEST:12.393 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:27:05.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 6 12:27:10.035: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e569f195-8f94-11ea-b5fe-0242ac110017" May 6 12:27:10.035: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e569f195-8f94-11ea-b5fe-0242ac110017" in namespace "e2e-tests-pods-4l5lz" to be "terminated due to deadline exceeded" May 6 12:27:10.078: INFO: Pod "pod-update-activedeadlineseconds-e569f195-8f94-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 42.498692ms May 6 12:27:12.190: INFO: Pod "pod-update-activedeadlineseconds-e569f195-8f94-11ea-b5fe-0242ac110017": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.154575431s May 6 12:27:12.190: INFO: Pod "pod-update-activedeadlineseconds-e569f195-8f94-11ea-b5fe-0242ac110017" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:27:12.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-4l5lz" for this suite. 
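activeDeadlineSeconds is one of the few pod spec fields that may be mutated on a running pod; the test above sets a short deadline on an existing pod and waits for the kubelet to fail it with reason DeadlineExceeded. A sketch of that update, assuming the pre-1.17 (context-free) client-go signatures that match this v1.13 run; the pod and namespace names are hypothetical:

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pods := cs.CoreV1().Pods("default")
    pod, err := pods.Get("pod-update-activedeadlineseconds-demo", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }

    // Shrink the deadline; the kubelet marks the pod Failed with
    // reason DeadlineExceeded once the deadline passes.
    deadline := int64(5)
    pod.Spec.ActiveDeadlineSeconds = &deadline
    if _, err := pods.Update(pod); err != nil {
        panic(err)
    }
    fmt.Println("activeDeadlineSeconds updated")
}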
May 6 12:27:18.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:27:18.304: INFO: namespace: e2e-tests-pods-4l5lz, resource: bindings, ignored listing per whitelist May 6 12:27:18.354: INFO: namespace e2e-tests-pods-4l5lz deletion completed in 6.136989275s • [SLOW TEST:12.959 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:27:18.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 6 12:27:18.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-rqvm8' May 6 12:27:18.910: INFO: stderr: "" May 6 12:27:18.910: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 6 12:27:19.915: INFO: Selector matched 1 pods for map[app:redis] May 6 12:27:19.915: INFO: Found 0 / 1 May 6 12:27:20.914: INFO: Selector matched 1 pods for map[app:redis] May 6 12:27:20.914: INFO: Found 0 / 1 May 6 12:27:21.915: INFO: Selector matched 1 pods for map[app:redis] May 6 12:27:21.915: INFO: Found 0 / 1 May 6 12:27:22.915: INFO: Selector matched 1 pods for map[app:redis] May 6 12:27:22.915: INFO: Found 1 / 1 May 6 12:27:22.915: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 6 12:27:22.918: INFO: Selector matched 1 pods for map[app:redis] May 6 12:27:22.918: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 6 12:27:22.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-468rg --namespace=e2e-tests-kubectl-rqvm8 -p {"metadata":{"annotations":{"x":"y"}}}' May 6 12:27:23.010: INFO: stderr: "" May 6 12:27:23.010: INFO: stdout: "pod/redis-master-468rg patched\n" STEP: checking annotations May 6 12:27:23.066: INFO: Selector matched 1 pods for map[app:redis] May 6 12:27:23.066: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:27:23.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rqvm8" for this suite. 
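The patch applied above is a strategic-merge patch that adds an annotation to the pod owned by the RC. A sketch of the same operation through client-go, assuming the pre-1.17 context-free signatures; the patch body is copied from the log, and the pod and namespace names come from this particular run, so treat them as illustrative only:

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // Equivalent of:
    //   kubectl patch pod redis-master-468rg --namespace=e2e-tests-kubectl-rqvm8 \
    //     -p '{"metadata":{"annotations":{"x":"y"}}}'
    patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
    pod, err := cs.CoreV1().Pods("e2e-tests-kubectl-rqvm8").
        Patch("redis-master-468rg", types.StrategicMergePatchType, patch)
    if err != nil {
        panic(err)
    }
    fmt.Printf("patched %s, annotations: %v\n", pod.Name, pod.Annotations)
}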
May 6 12:27:45.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:27:45.114: INFO: namespace: e2e-tests-kubectl-rqvm8, resource: bindings, ignored listing per whitelist May 6 12:27:45.143: INFO: namespace e2e-tests-kubectl-rqvm8 deletion completed in 22.074085533s • [SLOW TEST:26.789 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:27:45.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-c7xjv STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 12:27:45.282: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 6 12:28:11.459: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.9:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-c7xjv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 12:28:11.459: INFO: >>> kubeConfig: /root/.kube/config I0506 12:28:11.487672 7 log.go:172] (0xc000a8c4d0) (0xc000b9bf40) Create stream I0506 12:28:11.487695 7 log.go:172] (0xc000a8c4d0) (0xc000b9bf40) Stream added, broadcasting: 1 I0506 12:28:11.489433 7 log.go:172] (0xc000a8c4d0) Reply frame received for 1 I0506 12:28:11.489457 7 log.go:172] (0xc000a8c4d0) (0xc0020460a0) Create stream I0506 12:28:11.489465 7 log.go:172] (0xc000a8c4d0) (0xc0020460a0) Stream added, broadcasting: 3 I0506 12:28:11.490218 7 log.go:172] (0xc000a8c4d0) Reply frame received for 3 I0506 12:28:11.490246 7 log.go:172] (0xc000a8c4d0) (0xc001cbc320) Create stream I0506 12:28:11.490254 7 log.go:172] (0xc000a8c4d0) (0xc001cbc320) Stream added, broadcasting: 5 I0506 12:28:11.491076 7 log.go:172] (0xc000a8c4d0) Reply frame received for 5 I0506 12:28:11.555385 7 log.go:172] (0xc000a8c4d0) Data frame received for 3 I0506 12:28:11.555419 7 log.go:172] (0xc0020460a0) (3) Data frame handling I0506 12:28:11.555432 7 log.go:172] (0xc0020460a0) (3) Data frame sent I0506 12:28:11.555446 7 log.go:172] (0xc000a8c4d0) Data frame received for 3 I0506 12:28:11.555451 7 log.go:172] (0xc0020460a0) (3) Data frame handling I0506 12:28:11.555481 7 log.go:172] (0xc000a8c4d0) Data frame received for 5 I0506 12:28:11.555501 7 log.go:172] (0xc001cbc320) (5) Data frame handling 
I0506 12:28:11.556756 7 log.go:172] (0xc000a8c4d0) Data frame received for 1 I0506 12:28:11.556792 7 log.go:172] (0xc000b9bf40) (1) Data frame handling I0506 12:28:11.556823 7 log.go:172] (0xc000b9bf40) (1) Data frame sent I0506 12:28:11.556853 7 log.go:172] (0xc000a8c4d0) (0xc000b9bf40) Stream removed, broadcasting: 1 I0506 12:28:11.556885 7 log.go:172] (0xc000a8c4d0) Go away received I0506 12:28:11.556960 7 log.go:172] (0xc000a8c4d0) (0xc000b9bf40) Stream removed, broadcasting: 1 I0506 12:28:11.556983 7 log.go:172] (0xc000a8c4d0) (0xc0020460a0) Stream removed, broadcasting: 3 I0506 12:28:11.557007 7 log.go:172] (0xc000a8c4d0) (0xc001cbc320) Stream removed, broadcasting: 5 May 6 12:28:11.557: INFO: Found all expected endpoints: [netserver-0] May 6 12:28:11.559: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.213:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-c7xjv PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 12:28:11.559: INFO: >>> kubeConfig: /root/.kube/config I0506 12:28:11.588421 7 log.go:172] (0xc000a8ca50) (0xc002046500) Create stream I0506 12:28:11.588441 7 log.go:172] (0xc000a8ca50) (0xc002046500) Stream added, broadcasting: 1 I0506 12:28:11.590958 7 log.go:172] (0xc000a8ca50) Reply frame received for 1 I0506 12:28:11.591004 7 log.go:172] (0xc000a8ca50) (0xc0016f6460) Create stream I0506 12:28:11.591016 7 log.go:172] (0xc000a8ca50) (0xc0016f6460) Stream added, broadcasting: 3 I0506 12:28:11.592027 7 log.go:172] (0xc000a8ca50) Reply frame received for 3 I0506 12:28:11.592073 7 log.go:172] (0xc000a8ca50) (0xc002295720) Create stream I0506 12:28:11.592089 7 log.go:172] (0xc000a8ca50) (0xc002295720) Stream added, broadcasting: 5 I0506 12:28:11.592882 7 log.go:172] (0xc000a8ca50) Reply frame received for 5 I0506 12:28:11.656075 7 log.go:172] (0xc000a8ca50) Data frame received for 5 I0506 12:28:11.656104 7 log.go:172] (0xc002295720) (5) Data frame handling I0506 12:28:11.656148 7 log.go:172] (0xc000a8ca50) Data frame received for 3 I0506 12:28:11.656186 7 log.go:172] (0xc0016f6460) (3) Data frame handling I0506 12:28:11.656206 7 log.go:172] (0xc0016f6460) (3) Data frame sent I0506 12:28:11.656223 7 log.go:172] (0xc000a8ca50) Data frame received for 3 I0506 12:28:11.656230 7 log.go:172] (0xc0016f6460) (3) Data frame handling I0506 12:28:11.657813 7 log.go:172] (0xc000a8ca50) Data frame received for 1 I0506 12:28:11.657831 7 log.go:172] (0xc002046500) (1) Data frame handling I0506 12:28:11.657841 7 log.go:172] (0xc002046500) (1) Data frame sent I0506 12:28:11.657967 7 log.go:172] (0xc000a8ca50) (0xc002046500) Stream removed, broadcasting: 1 I0506 12:28:11.658040 7 log.go:172] (0xc000a8ca50) (0xc002046500) Stream removed, broadcasting: 1 I0506 12:28:11.658048 7 log.go:172] (0xc000a8ca50) (0xc0016f6460) Stream removed, broadcasting: 3 I0506 12:28:11.658054 7 log.go:172] (0xc000a8ca50) (0xc002295720) Stream removed, broadcasting: 5 May 6 12:28:11.658: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:28:11.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0506 12:28:11.658310 7 log.go:172] (0xc000a8ca50) Go away received STEP: Destroying namespace "e2e-tests-pod-network-test-c7xjv" for this suite. 
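The exec streams above are the framework curling each netserver pod's /hostName endpoint on port 8080 from a host-network test pod; the case passes once every expected endpoint (netserver-0, netserver-1) has answered with its own name. A standalone sketch of the same probe in plain Go; the pod IPs are the ones seen in this run and would differ on any other cluster:

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{Timeout: 15 * time.Second}
    // IPs taken from the log above; substitute the pod IPs of your own netserver pods.
    for _, ip := range []string{"10.244.1.9", "10.244.2.213"} {
        resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", ip))
        if err != nil {
            fmt.Printf("%s: unreachable: %v\n", ip, err)
            continue
        }
        body, _ := ioutil.ReadAll(resp.Body)
        resp.Body.Close()
        // The netserver image replies with its own pod name, which the framework
        // collects until all expected endpoints have been seen.
        fmt.Printf("%s -> %s\n", ip, string(body))
    }
}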
May 6 12:28:35.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:28:35.729: INFO: namespace: e2e-tests-pod-network-test-c7xjv, resource: bindings, ignored listing per whitelist May 6 12:28:35.824: INFO: namespace e2e-tests-pod-network-test-c7xjv deletion completed in 24.162636519s • [SLOW TEST:50.680 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:28:35.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 12:28:35.959: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 6 12:28:35.967: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:35.968: INFO: Number of nodes with available pods: 0 May 6 12:28:35.968: INFO: Node hunter-worker is running more than one daemon pod May 6 12:28:37.004: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:37.007: INFO: Number of nodes with available pods: 0 May 6 12:28:37.007: INFO: Node hunter-worker is running more than one daemon pod May 6 12:28:37.973: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:37.975: INFO: Number of nodes with available pods: 0 May 6 12:28:37.975: INFO: Node hunter-worker is running more than one daemon pod May 6 12:28:39.125: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:39.129: INFO: Number of nodes with available pods: 0 May 6 12:28:39.129: INFO: Node hunter-worker is running more than one daemon pod May 6 12:28:39.977: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:39.986: INFO: Number of nodes with available pods: 1 May 6 12:28:39.986: INFO: Node hunter-worker2 is running more than one daemon pod May 6 12:28:40.987: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:40.991: INFO: Number of nodes with available pods: 2 May 6 12:28:40.991: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 6 12:28:41.167: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:41.167: INFO: Wrong image for pod: daemon-set-2qjb9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:41.209: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:42.213: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:42.213: INFO: Wrong image for pod: daemon-set-2qjb9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:42.216: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:43.214: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:43.214: INFO: Wrong image for pod: daemon-set-2qjb9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 6 12:28:43.219: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:44.255: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:44.255: INFO: Wrong image for pod: daemon-set-2qjb9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:44.255: INFO: Pod daemon-set-2qjb9 is not available May 6 12:28:44.259: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:45.214: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:45.214: INFO: Wrong image for pod: daemon-set-2qjb9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:45.214: INFO: Pod daemon-set-2qjb9 is not available May 6 12:28:45.218: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:46.213: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:46.213: INFO: Wrong image for pod: daemon-set-2qjb9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:46.213: INFO: Pod daemon-set-2qjb9 is not available May 6 12:28:46.216: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:47.212: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:47.212: INFO: Wrong image for pod: daemon-set-2qjb9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:47.212: INFO: Pod daemon-set-2qjb9 is not available May 6 12:28:47.216: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:48.213: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:48.213: INFO: Wrong image for pod: daemon-set-2qjb9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:48.213: INFO: Pod daemon-set-2qjb9 is not available May 6 12:28:48.217: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:49.212: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:49.212: INFO: Wrong image for pod: daemon-set-2qjb9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 6 12:28:49.212: INFO: Pod daemon-set-2qjb9 is not available May 6 12:28:49.215: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:50.213: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:50.214: INFO: Wrong image for pod: daemon-set-2qjb9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:50.214: INFO: Pod daemon-set-2qjb9 is not available May 6 12:28:50.217: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:51.213: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:51.213: INFO: Wrong image for pod: daemon-set-2qjb9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:51.213: INFO: Pod daemon-set-2qjb9 is not available May 6 12:28:51.217: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:52.214: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:52.214: INFO: Pod daemon-set-nmvsx is not available May 6 12:28:52.217: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:53.214: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:53.214: INFO: Pod daemon-set-nmvsx is not available May 6 12:28:53.218: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:54.395: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:54.395: INFO: Pod daemon-set-nmvsx is not available May 6 12:28:54.416: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:55.213: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:55.213: INFO: Pod daemon-set-nmvsx is not available May 6 12:28:55.218: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:56.245: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:56.249: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:57.213: INFO: Wrong image for pod: daemon-set-29njg. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:57.215: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:58.213: INFO: Wrong image for pod: daemon-set-29njg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 6 12:28:58.213: INFO: Pod daemon-set-29njg is not available May 6 12:28:58.216: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:59.216: INFO: Pod daemon-set-8gttx is not available May 6 12:28:59.220: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 6 12:28:59.223: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:28:59.225: INFO: Number of nodes with available pods: 1 May 6 12:28:59.225: INFO: Node hunter-worker2 is running more than one daemon pod May 6 12:29:00.280: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:29:00.283: INFO: Number of nodes with available pods: 1 May 6 12:29:00.283: INFO: Node hunter-worker2 is running more than one daemon pod May 6 12:29:01.268: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:29:01.271: INFO: Number of nodes with available pods: 1 May 6 12:29:01.271: INFO: Node hunter-worker2 is running more than one daemon pod May 6 12:29:02.230: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:29:02.233: INFO: Number of nodes with available pods: 1 May 6 12:29:02.233: INFO: Node hunter-worker2 is running more than one daemon pod May 6 12:29:03.228: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 12:29:03.230: INFO: Number of nodes with available pods: 2 May 6 12:29:03.230: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-28xm5, will wait for the garbage collector to delete the pods May 6 12:29:03.295: INFO: Deleting DaemonSet.extensions daemon-set took: 5.637847ms May 6 12:29:03.495: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.254052ms May 6 12:29:11.798: INFO: Number of nodes with available pods: 0 May 6 12:29:11.798: INFO: Number of running nodes: 0, number of available pods: 0 May 6 12:29:11.848: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-28xm5/daemonsets","resourceVersion":"9048212"},"items":null} May 6 12:29:11.851: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-28xm5/pods","resourceVersion":"9048212"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:29:11.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-28xm5" for this suite. May 6 12:29:17.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:29:17.973: INFO: namespace: e2e-tests-daemonsets-28xm5, resource: bindings, ignored listing per whitelist May 6 12:29:17.975: INFO: namespace e2e-tests-daemonsets-28xm5 deletion completed in 6.112525423s • [SLOW TEST:42.150 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:29:17.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0506 12:29:59.227845 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 6 12:29:59.227: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:29:59.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-mjrd9" for this suite. May 6 12:30:07.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:30:07.288: INFO: namespace: e2e-tests-gc-mjrd9, resource: bindings, ignored listing per whitelist May 6 12:30:07.317: INFO: namespace e2e-tests-gc-mjrd9 deletion completed in 8.086170869s • [SLOW TEST:49.342 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:30:07.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 6 12:30:07.563: INFO: Waiting up to 5m0s for pod "pod-51e8d1ad-8f95-11ea-b5fe-0242ac110017" in namespace "e2e-tests-emptydir-h29lz" to be "success or failure" May 6 12:30:07.584: INFO: Pod "pod-51e8d1ad-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 21.720887ms May 6 12:30:09.587: INFO: Pod "pod-51e8d1ad-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024288046s May 6 12:30:11.646: INFO: Pod "pod-51e8d1ad-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.083629639s May 6 12:30:13.649: INFO: Pod "pod-51e8d1ad-8f95-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.086430069s STEP: Saw pod success May 6 12:30:13.649: INFO: Pod "pod-51e8d1ad-8f95-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:30:13.651: INFO: Trying to get logs from node hunter-worker pod pod-51e8d1ad-8f95-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 12:30:13.911: INFO: Waiting for pod pod-51e8d1ad-8f95-11ea-b5fe-0242ac110017 to disappear May 6 12:30:13.919: INFO: Pod pod-51e8d1ad-8f95-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:30:13.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-h29lz" for this suite. May 6 12:30:20.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:30:20.073: INFO: namespace: e2e-tests-emptydir-h29lz, resource: bindings, ignored listing per whitelist May 6 12:30:20.075: INFO: namespace e2e-tests-emptydir-h29lz deletion completed in 6.153114672s • [SLOW TEST:12.758 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:30:20.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-5970c9b4-8f95-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 12:30:20.180: INFO: Waiting up to 5m0s for pod "pod-secrets-59717be2-8f95-11ea-b5fe-0242ac110017" in namespace "e2e-tests-secrets-5rl25" to be "success or failure" May 6 12:30:20.215: INFO: Pod "pod-secrets-59717be2-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 34.56563ms May 6 12:30:22.329: INFO: Pod "pod-secrets-59717be2-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149072118s May 6 12:30:24.334: INFO: Pod "pod-secrets-59717be2-8f95-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.153543903s STEP: Saw pod success May 6 12:30:24.334: INFO: Pod "pod-secrets-59717be2-8f95-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:30:24.337: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-59717be2-8f95-11ea-b5fe-0242ac110017 container secret-volume-test: STEP: delete the pod May 6 12:30:24.677: INFO: Waiting for pod pod-secrets-59717be2-8f95-11ea-b5fe-0242ac110017 to disappear May 6 12:30:24.682: INFO: Pod pod-secrets-59717be2-8f95-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:30:24.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-5rl25" for this suite. May 6 12:30:30.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:30:30.767: INFO: namespace: e2e-tests-secrets-5rl25, resource: bindings, ignored listing per whitelist May 6 12:30:30.814: INFO: namespace e2e-tests-secrets-5rl25 deletion completed in 6.128441236s • [SLOW TEST:10.738 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:30:30.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 12:30:30.959: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5fdd6845-8f95-11ea-b5fe-0242ac110017" in namespace "e2e-tests-downward-api-p8xjb" to be "success or failure" May 6 12:30:30.964: INFO: Pod "downwardapi-volume-5fdd6845-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.701448ms May 6 12:30:33.078: INFO: Pod "downwardapi-volume-5fdd6845-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118821048s May 6 12:30:35.082: INFO: Pod "downwardapi-volume-5fdd6845-8f95-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.12233101s May 6 12:30:37.085: INFO: Pod "downwardapi-volume-5fdd6845-8f95-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.126039663s STEP: Saw pod success May 6 12:30:37.085: INFO: Pod "downwardapi-volume-5fdd6845-8f95-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:30:37.088: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-5fdd6845-8f95-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 12:30:37.119: INFO: Waiting for pod downwardapi-volume-5fdd6845-8f95-11ea-b5fe-0242ac110017 to disappear May 6 12:30:37.149: INFO: Pod downwardapi-volume-5fdd6845-8f95-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:30:37.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-p8xjb" for this suite. May 6 12:30:43.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:30:43.196: INFO: namespace: e2e-tests-downward-api-p8xjb, resource: bindings, ignored listing per whitelist May 6 12:30:43.218: INFO: namespace e2e-tests-downward-api-p8xjb deletion completed in 6.066069943s • [SLOW TEST:12.404 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:30:43.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc May 6 12:30:43.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k7cjr' May 6 12:30:47.095: INFO: stderr: "" May 6 12:30:47.095: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. May 6 12:30:48.099: INFO: Selector matched 1 pods for map[app:redis] May 6 12:30:48.099: INFO: Found 0 / 1 May 6 12:30:49.099: INFO: Selector matched 1 pods for map[app:redis] May 6 12:30:49.099: INFO: Found 0 / 1 May 6 12:30:50.100: INFO: Selector matched 1 pods for map[app:redis] May 6 12:30:50.100: INFO: Found 0 / 1 May 6 12:30:51.114: INFO: Selector matched 1 pods for map[app:redis] May 6 12:30:51.114: INFO: Found 1 / 1 May 6 12:30:51.114: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 6 12:30:51.126: INFO: Selector matched 1 pods for map[app:redis] May 6 12:30:51.126: INFO: ForEach: Found 1 pods from the filter. 
Now looping through them. STEP: checking for a matching strings May 6 12:30:51.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tw7zq redis-master --namespace=e2e-tests-kubectl-k7cjr' May 6 12:30:51.224: INFO: stderr: "" May 6 12:30:51.224: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 06 May 12:30:50.208 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 May 12:30:50.208 # Server started, Redis version 3.2.12\n1:M 06 May 12:30:50.208 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 May 12:30:50.208 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 6 12:30:51.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tw7zq redis-master --namespace=e2e-tests-kubectl-k7cjr --tail=1' May 6 12:30:51.332: INFO: stderr: "" May 6 12:30:51.332: INFO: stdout: "1:M 06 May 12:30:50.208 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 6 12:30:51.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tw7zq redis-master --namespace=e2e-tests-kubectl-k7cjr --limit-bytes=1' May 6 12:30:51.430: INFO: stderr: "" May 6 12:30:51.430: INFO: stdout: " " STEP: exposing timestamps May 6 12:30:51.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tw7zq redis-master --namespace=e2e-tests-kubectl-k7cjr --tail=1 --timestamps' May 6 12:30:51.543: INFO: stderr: "" May 6 12:30:51.543: INFO: stdout: "2020-05-06T12:30:50.208711842Z 1:M 06 May 12:30:50.208 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 6 12:30:54.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tw7zq redis-master --namespace=e2e-tests-kubectl-k7cjr --since=1s' May 6 12:30:54.150: INFO: stderr: "" May 6 12:30:54.150: INFO: stdout: "" May 6 12:30:54.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-tw7zq redis-master --namespace=e2e-tests-kubectl-k7cjr --since=24h' May 6 12:30:54.257: INFO: stderr: "" May 6 12:30:54.257: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 06 May 12:30:50.208 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 May 12:30:50.208 # Server started, Redis version 3.2.12\n1:M 06 May 12:30:50.208 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 May 12:30:50.208 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources May 6 12:30:54.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-k7cjr' May 6 12:30:54.357: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 12:30:54.357: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 6 12:30:54.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-k7cjr' May 6 12:30:55.141: INFO: stderr: "No resources found.\n" May 6 12:30:55.141: INFO: stdout: "" May 6 12:30:55.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-k7cjr -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 12:30:55.334: INFO: stderr: "" May 6 12:30:55.334: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:30:55.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-k7cjr" for this suite. 
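The filtering flags exercised above (--tail, --limit-bytes, --timestamps, --since) map one-to-one onto fields of PodLogOptions. A rough sketch of the same reads through client-go, assuming the pre-1.17 context-free request methods contemporary with this v1.13 run; the pod and namespace names are copied from the log and purely illustrative:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    tail := int64(1)          // kubectl logs --tail=1
    limit := int64(1)         // kubectl logs --limit-bytes=1
    since := int64(24 * 3600) // kubectl logs --since=24h
    opts := &v1.PodLogOptions{
        Container:    "redis-master",
        TailLines:    &tail,
        LimitBytes:   &limit,
        Timestamps:   true, // kubectl logs --timestamps
        SinceSeconds: &since,
    }
    raw, err := cs.CoreV1().Pods("e2e-tests-kubectl-k7cjr").
        GetLogs("redis-master-tw7zq", opts).DoRaw()
    if err != nil {
        panic(err)
    }
    fmt.Println(string(raw))
}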
May 6 12:31:17.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:31:17.433: INFO: namespace: e2e-tests-kubectl-k7cjr, resource: bindings, ignored listing per whitelist May 6 12:31:17.470: INFO: namespace e2e-tests-kubectl-k7cjr deletion completed in 22.131923709s • [SLOW TEST:34.252 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:31:17.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 6 12:31:17.551: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:31:24.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-5jglp" for this suite. 
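The init-container case above creates a RestartNever pod whose init container exits non-zero and asserts that the app container never starts and the pod ends up Failed; with RestartPolicy=Never a failed init container is not retried. A minimal sketch of such a spec, with hypothetical names and image:

package main

import (
    "encoding/json"
    "fmt"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
        Spec: v1.PodSpec{
            // With RestartPolicy=Never the failed init container is not retried,
            // so the pod fails and the app container below never runs.
            RestartPolicy: v1.RestartPolicyNever,
            InitContainers: []v1.Container{{
                Name:    "init-fails",
                Image:   "busybox",
                Command: []string{"/bin/false"},
            }},
            Containers: []v1.Container{{
                Name:    "app",
                Image:   "busybox",
                Command: []string{"sleep", "3600"},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}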
May 6 12:31:30.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:31:30.228: INFO: namespace: e2e-tests-init-container-5jglp, resource: bindings, ignored listing per whitelist May 6 12:31:30.248: INFO: namespace e2e-tests-init-container-5jglp deletion completed in 6.086424241s • [SLOW TEST:12.778 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:31:30.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 12:31:30.379: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 May 6 12:31:30.388: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-jz746/daemonsets","resourceVersion":"9048835"},"items":null} May 6 12:31:30.390: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-jz746/pods","resourceVersion":"9048835"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:31:30.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-jz746" for this suite. 
May 6 12:31:36.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:31:36.438: INFO: namespace: e2e-tests-daemonsets-jz746, resource: bindings, ignored listing per whitelist May 6 12:31:36.486: INFO: namespace e2e-tests-daemonsets-jz746 deletion completed in 6.085711664s S [SKIPPING] [6.237 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 6 12:31:30.379: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:31:36.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 6 12:31:36.584: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86fb722d-8f95-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-28bbx" to be "success or failure" May 6 12:31:36.588: INFO: Pod "downwardapi-volume-86fb722d-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.561957ms May 6 12:31:38.592: INFO: Pod "downwardapi-volume-86fb722d-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008167321s May 6 12:31:40.596: INFO: Pod "downwardapi-volume-86fb722d-8f95-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.011770035s May 6 12:31:42.600: INFO: Pod "downwardapi-volume-86fb722d-8f95-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015631654s STEP: Saw pod success May 6 12:31:42.600: INFO: Pod "downwardapi-volume-86fb722d-8f95-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:31:42.602: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-86fb722d-8f95-11ea-b5fe-0242ac110017 container client-container: STEP: delete the pod May 6 12:31:42.643: INFO: Waiting for pod downwardapi-volume-86fb722d-8f95-11ea-b5fe-0242ac110017 to disappear May 6 12:31:42.690: INFO: Pod downwardapi-volume-86fb722d-8f95-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:31:42.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-28bbx" for this suite. 
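The projected downward-API spec above exposes the container's memory limit as a file through a projected volume and compares what the container reads against the declared limit. A sketch of that pod shape with the core/v1 types; the 64Mi limit, file paths, and names are illustrative assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	memLimit := resource.MustParse("64Mi") // illustrative limit

	// A projected downward-API volume writes limits.memory of the container
	// into /etc/podinfo/memory_limit, which the container then reads.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-memory-limit"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: memLimit},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```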
May 6 12:31:48.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:31:48.751: INFO: namespace: e2e-tests-projected-28bbx, resource: bindings, ignored listing per whitelist May 6 12:31:48.779: INFO: namespace e2e-tests-projected-28bbx deletion completed in 6.084866514s • [SLOW TEST:12.293 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:31:48.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override arguments May 6 12:31:48.888: INFO: Waiting up to 5m0s for pod "client-containers-8e509f54-8f95-11ea-b5fe-0242ac110017" in namespace "e2e-tests-containers-clbn5" to be "success or failure" May 6 12:31:48.892: INFO: Pod "client-containers-8e509f54-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.439823ms May 6 12:31:50.895: INFO: Pod "client-containers-8e509f54-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007398042s May 6 12:31:53.006: INFO: Pod "client-containers-8e509f54-8f95-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118541477s STEP: Saw pod success May 6 12:31:53.006: INFO: Pod "client-containers-8e509f54-8f95-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:31:53.009: INFO: Trying to get logs from node hunter-worker pod client-containers-8e509f54-8f95-11ea-b5fe-0242ac110017 container test-container: STEP: delete the pod May 6 12:31:53.038: INFO: Waiting for pod client-containers-8e509f54-8f95-11ea-b5fe-0242ac110017 to disappear May 6 12:31:53.052: INFO: Pod client-containers-8e509f54-8f95-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:31:53.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-clbn5" for this suite. 
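"Override the image's default arguments (docker cmd)" maps to setting args on the container: args replaces the image's CMD while leaving its ENTRYPOINT alone, whereas command would replace the ENTRYPOINT. A small illustrative container spec, not the test's actual image or arguments:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Only Args is set, so the image keeps its ENTRYPOINT and runs it with
	// these arguments instead of the image's built-in CMD.
	c := corev1.Container{
		Name:  "test-container",
		Image: "busybox",                         // illustrative image
		Args:  []string{"override", "arguments"}, // replaces the image's default CMD
	}

	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
```

The distinction mirrors Docker's ENTRYPOINT/CMD split, which is why the spec title says "docker cmd".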
May 6 12:31:59.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:31:59.101: INFO: namespace: e2e-tests-containers-clbn5, resource: bindings, ignored listing per whitelist May 6 12:31:59.151: INFO: namespace e2e-tests-containers-clbn5 deletion completed in 6.096188718s • [SLOW TEST:10.372 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:31:59.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-94803631-8f95-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume secrets May 6 12:31:59.286: INFO: Waiting up to 5m0s for pod "pod-secrets-94826bec-8f95-11ea-b5fe-0242ac110017" in namespace "e2e-tests-secrets-ll66j" to be "success or failure" May 6 12:31:59.288: INFO: Pod "pod-secrets-94826bec-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.712831ms May 6 12:32:01.292: INFO: Pod "pod-secrets-94826bec-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006148029s May 6 12:32:03.312: INFO: Pod "pod-secrets-94826bec-8f95-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025988086s STEP: Saw pod success May 6 12:32:03.312: INFO: Pod "pod-secrets-94826bec-8f95-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:32:03.314: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-94826bec-8f95-11ea-b5fe-0242ac110017 container secret-volume-test: STEP: delete the pod May 6 12:32:03.344: INFO: Waiting for pod pod-secrets-94826bec-8f95-11ea-b5fe-0242ac110017 to disappear May 6 12:32:03.354: INFO: Pod pod-secrets-94826bec-8f95-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:32:03.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-ll66j" for this suite. 
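The secret-volume spec above mounts one secret key under a remapped file name with an explicit file mode, then reads the file back to check both its contents and its permission bits. An illustrative sketch; the secret name, key, target path, and mode are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // per-item file mode checked by the test

	// Only the "data-1" key of the secret is projected, renamed to
	// new-path-data-1, and given mode 0400 inside the mount.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-item-mode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map", // illustrative secret name
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &mode,
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```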
May 6 12:32:09.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:32:09.390: INFO: namespace: e2e-tests-secrets-ll66j, resource: bindings, ignored listing per whitelist May 6 12:32:09.447: INFO: namespace e2e-tests-secrets-ll66j deletion completed in 6.09035902s • [SLOW TEST:10.296 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:32:09.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed May 6 12:32:13.623: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-9aa28ce9-8f95-11ea-b5fe-0242ac110017", GenerateName:"", Namespace:"e2e-tests-pods-59tvr", SelfLink:"/api/v1/namespaces/e2e-tests-pods-59tvr/pods/pod-submit-remove-9aa28ce9-8f95-11ea-b5fe-0242ac110017", UID:"9aa7f079-8f95-11ea-99e8-0242ac110002", ResourceVersion:"9049004", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724365129, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"546480473"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-f82jd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0024cd780), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-f82jd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002563c68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002684ae0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002563cb0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002563cd0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002563cd8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002563cdc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724365129, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724365133, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724365133, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724365129, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", 
NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.21", StartTime:(*v1.Time)(0xc0016da320), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0016da360), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://293e99157209d7419df3f5c7a810854e4797346d4b19ca4b5dd94266a0b91dd5"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:32:21.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-59tvr" for this suite. May 6 12:32:27.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:32:27.373: INFO: namespace: e2e-tests-pods-59tvr, resource: bindings, ignored listing per whitelist May 6 12:32:27.399: INFO: namespace e2e-tests-pods-59tvr deletion completed in 6.082553357s • [SLOW TEST:17.951 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:32:27.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-8gkk STEP: Creating a pod to test atomic-volume-subpath May 6 12:32:27.524: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-8gkk" in namespace "e2e-tests-subpath-lnflc" to be "success or failure" May 6 12:32:27.528: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057459ms May 6 12:32:29.533: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00882257s May 6 12:32:31.537: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012735597s May 6 12:32:33.540: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.016146156s May 6 12:32:35.544: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Running", Reason="", readiness=false. Elapsed: 8.019798688s May 6 12:32:37.547: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Running", Reason="", readiness=false. Elapsed: 10.023272947s May 6 12:32:39.551: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Running", Reason="", readiness=false. Elapsed: 12.027046347s May 6 12:32:41.554: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Running", Reason="", readiness=false. Elapsed: 14.029856004s May 6 12:32:43.558: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Running", Reason="", readiness=false. Elapsed: 16.033702756s May 6 12:32:45.562: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Running", Reason="", readiness=false. Elapsed: 18.037814854s May 6 12:32:47.565: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Running", Reason="", readiness=false. Elapsed: 20.041340802s May 6 12:32:49.569: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Running", Reason="", readiness=false. Elapsed: 22.045225055s May 6 12:32:51.573: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Running", Reason="", readiness=false. Elapsed: 24.049265224s May 6 12:32:53.578: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Running", Reason="", readiness=false. Elapsed: 26.053828733s May 6 12:32:55.582: INFO: Pod "pod-subpath-test-secret-8gkk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.057550897s STEP: Saw pod success May 6 12:32:55.582: INFO: Pod "pod-subpath-test-secret-8gkk" satisfied condition "success or failure" May 6 12:32:55.584: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-8gkk container test-container-subpath-secret-8gkk: STEP: delete the pod May 6 12:32:55.662: INFO: Waiting for pod pod-subpath-test-secret-8gkk to disappear May 6 12:32:55.672: INFO: Pod pod-subpath-test-secret-8gkk no longer exists STEP: Deleting pod pod-subpath-test-secret-8gkk May 6 12:32:55.672: INFO: Deleting pod "pod-subpath-test-secret-8gkk" in namespace "e2e-tests-subpath-lnflc" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:32:55.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-lnflc" for this suite. 
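The subpath spec above exposes a single entry of an atomically-written volume (here a secret) by setting subPath on the volume mount, and the long Running phase in the log corresponds to the test container repeatedly reading that file while the volume is updated. A sketch of the subPath mount, with a plain 30-second read loop standing in for the test container; the names, paths, and loop are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// SubPath "data" means only that key of the secret is visible, mounted
	// directly as the file /etc/secret/data rather than as a directory.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath-secret",
				Image:   "busybox",
				Command: []string{"sh", "-c", "i=0; while [ $i -lt 30 ]; do cat /etc/secret/data; sleep 1; i=$((i+1)); done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/etc/secret/data",
					SubPath:   "data", // mount only this key of the secret
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}, // illustrative
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```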
May 6 12:33:01.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:33:01.744: INFO: namespace: e2e-tests-subpath-lnflc, resource: bindings, ignored listing per whitelist May 6 12:33:01.776: INFO: namespace e2e-tests-subpath-lnflc deletion completed in 6.098013397s • [SLOW TEST:34.377 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:33:01.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-b9cea937-8f95-11ea-b5fe-0242ac110017 STEP: Creating a pod to test consume configMaps May 6 12:33:01.860: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b9cfc78b-8f95-11ea-b5fe-0242ac110017" in namespace "e2e-tests-projected-f5gqf" to be "success or failure" May 6 12:33:01.891: INFO: Pod "pod-projected-configmaps-b9cfc78b-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 31.420109ms May 6 12:33:04.122: INFO: Pod "pod-projected-configmaps-b9cfc78b-8f95-11ea-b5fe-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.262810382s May 6 12:33:06.138: INFO: Pod "pod-projected-configmaps-b9cfc78b-8f95-11ea-b5fe-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.277901871s May 6 12:33:08.146: INFO: Pod "pod-projected-configmaps-b9cfc78b-8f95-11ea-b5fe-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.286758329s STEP: Saw pod success May 6 12:33:08.147: INFO: Pod "pod-projected-configmaps-b9cfc78b-8f95-11ea-b5fe-0242ac110017" satisfied condition "success or failure" May 6 12:33:08.149: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-b9cfc78b-8f95-11ea-b5fe-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 6 12:33:08.197: INFO: Waiting for pod pod-projected-configmaps-b9cfc78b-8f95-11ea-b5fe-0242ac110017 to disappear May 6 12:33:08.206: INFO: Pod pod-projected-configmaps-b9cfc78b-8f95-11ea-b5fe-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:33:08.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-f5gqf" for this suite. 
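The projected ConfigMap spec above is the ConfigMap analogue of the secret mapping shown earlier: a projected volume source remaps a key (the key and paths below are assumed names) to a new path inside the mount, and the container reads it back. Sketch:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The ConfigMap key "data-2" is projected into the volume as
	// path/to/data-2, which the container then reads.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-mappings"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								Items: []corev1.KeyToPath{{
									Key:  "data-2",
									Path: "path/to/data-2",
								}},
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```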
May 6 12:33:14.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:33:14.272: INFO: namespace: e2e-tests-projected-f5gqf, resource: bindings, ignored listing per whitelist May 6 12:33:14.305: INFO: namespace e2e-tests-projected-f5gqf deletion completed in 6.096866906s • [SLOW TEST:12.529 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 6 12:33:14.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 6 12:33:50.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-l4hvl" for this suite. 
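The container-runtime spec above starts containers that exit and then checks RestartCount, Phase, the Ready condition, and State for each restart policy; the terminate-cmd-rpa/rpof/rpn containers correspond to Always, OnFailure, and Never. A sketch of the Never variant and the status it should produce; the image and command are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A container that exits non-zero under RestartPolicy=Never. The observable
	// status for this case: pod Phase "Failed", container State.Terminated with
	// ExitCode 1, Ready=false, RestartCount 0. Under OnFailure or Always the
	// same container would be restarted and RestartCount would keep growing.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
		ObjectMeta: metav1.ObjectMeta{Name: "terminate-cmd-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "terminate-cmd",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 1"},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```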
May 6 12:33:56.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 6 12:33:56.875: INFO: namespace: e2e-tests-container-runtime-l4hvl, resource: bindings, ignored listing per whitelist May 6 12:33:56.930: INFO: namespace e2e-tests-container-runtime-l4hvl deletion completed in 6.114216639s • [SLOW TEST:42.624 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSMay 6 12:33:56.930: INFO: Running AfterSuite actions on all nodes May 6 12:33:56.930: INFO: Running AfterSuite actions on node 1 May 6 12:33:56.930: INFO: Skipping dumping logs from cluster Ran 200 of 2164 Specs in 6429.793 seconds SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped PASS