I0514 10:46:54.546374 6 e2e.go:224] Starting e2e run "399b812e-95d0-11ea-9b22-0242ac110018" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589453213 - Will randomize all specs
Will run 201 of 2164 specs
May 14 10:46:54.738: INFO: >>> kubeConfig: /root/.kube/config
May 14 10:46:54.741: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 14 10:46:54.757: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 14 10:46:54.790: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 14 10:46:54.790: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 14 10:46:54.790: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 14 10:46:54.800: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 14 10:46:54.800: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 14 10:46:54.800: INFO: e2e test version: v1.13.12
May 14 10:46:54.802: INFO: kube-apiserver version: v1.13.12
S
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 14 10:46:54.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
May 14 10:46:55.184: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 14 10:46:55.192: INFO: Waiting up to 5m0s for pod "pod-3a49af02-95d0-11ea-9b22-0242ac110018" in namespace "e2e-tests-emptydir-xhhnp" to be "success or failure"
May 14 10:46:55.256: INFO: Pod "pod-3a49af02-95d0-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 64.843692ms
May 14 10:46:57.260: INFO: Pod "pod-3a49af02-95d0-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067911821s
May 14 10:46:59.333: INFO: Pod "pod-3a49af02-95d0-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.141605588s
May 14 10:47:01.337: INFO: Pod "pod-3a49af02-95d0-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.145713146s
STEP: Saw pod success
May 14 10:47:01.337: INFO: Pod "pod-3a49af02-95d0-11ea-9b22-0242ac110018" satisfied condition "success or failure"
May 14 10:47:01.341: INFO: Trying to get logs from node hunter-worker pod pod-3a49af02-95d0-11ea-9b22-0242ac110018 container test-container:
STEP: delete the pod
May 14 10:47:01.407: INFO: Waiting for pod pod-3a49af02-95d0-11ea-9b22-0242ac110018 to disappear
May 14 10:47:01.417: INFO: Pod pod-3a49af02-95d0-11ea-9b22-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 14 10:47:01.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xhhnp" for this suite.
May 14 10:47:07.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 10:47:07.497: INFO: namespace: e2e-tests-emptydir-xhhnp, resource: bindings, ignored listing per whitelist
May 14 10:47:07.504: INFO: namespace e2e-tests-emptydir-xhhnp deletion completed in 6.083309408s
• [SLOW TEST:12.702 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 14 10:47:07.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 14 10:47:07.635: INFO: Waiting up to 5m0s for pod "pod-41b2934d-95d0-11ea-9b22-0242ac110018" in namespace "e2e-tests-emptydir-q4b7h" to be "success or failure"
May 14 10:47:07.639: INFO: Pod "pod-41b2934d-95d0-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14488ms
May 14 10:47:09.759: INFO: Pod "pod-41b2934d-95d0-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123859014s
May 14 10:47:11.766: INFO: Pod "pod-41b2934d-95d0-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.131043942s
STEP: Saw pod success
May 14 10:47:11.766: INFO: Pod "pod-41b2934d-95d0-11ea-9b22-0242ac110018" satisfied condition "success or failure"
May 14 10:47:11.768: INFO: Trying to get logs from node hunter-worker pod pod-41b2934d-95d0-11ea-9b22-0242ac110018 container test-container:
STEP: delete the pod
May 14 10:47:11.781: INFO: Waiting for pod pod-41b2934d-95d0-11ea-9b22-0242ac110018 to disappear
May 14 10:47:11.802: INFO: Pod pod-41b2934d-95d0-11ea-9b22-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 14 10:47:11.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-q4b7h" for this suite.
May 14 10:47:17.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 10:47:17.861: INFO: namespace: e2e-tests-emptydir-q4b7h, resource: bindings, ignored listing per whitelist
May 14 10:47:17.891: INFO: namespace e2e-tests-emptydir-q4b7h deletion completed in 6.087089796s
• [SLOW TEST:10.387 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 14 10:47:17.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-47dfeaf9-95d0-11ea-9b22-0242ac110018
STEP: Creating a pod to test consume configMaps
May 14 10:47:18.041: INFO: Waiting up to 5m0s for pod "pod-configmaps-47e618d9-95d0-11ea-9b22-0242ac110018" in namespace "e2e-tests-configmap-jtckg" to be "success or failure"
May 14 10:47:18.045: INFO: Pod "pod-configmaps-47e618d9-95d0-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.239048ms
May 14 10:47:20.418: INFO: Pod "pod-configmaps-47e618d9-95d0-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376377304s
May 14 10:47:22.477: INFO: Pod "pod-configmaps-47e618d9-95d0-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.43501934s
STEP: Saw pod success
May 14 10:47:22.477: INFO: Pod "pod-configmaps-47e618d9-95d0-11ea-9b22-0242ac110018" satisfied condition "success or failure"
May 14 10:47:22.548: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-47e618d9-95d0-11ea-9b22-0242ac110018 container configmap-volume-test:
STEP: delete the pod
May 14 10:47:22.645: INFO: Waiting for pod pod-configmaps-47e618d9-95d0-11ea-9b22-0242ac110018 to disappear
May 14 10:47:22.715: INFO: Pod pod-configmaps-47e618d9-95d0-11ea-9b22-0242ac110018 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 14 10:47:22.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jtckg" for this suite.
May 14 10:47:28.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 10:47:28.844: INFO: namespace: e2e-tests-configmap-jtckg, resource: bindings, ignored listing per whitelist
May 14 10:47:28.857: INFO: namespace e2e-tests-configmap-jtckg deletion completed in 6.140056444s
• [SLOW TEST:10.966 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 14 10:47:28.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 14 10:47:28.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-bcbwz'
May 14 10:47:31.121: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 14 10:47:31.121: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
May 14 10:47:31.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-bcbwz'
May 14 10:47:31.308: INFO: stderr: ""
May 14 10:47:31.308: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 14 10:47:31.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bcbwz" for this suite.
May 14 10:47:53.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 10:47:53.384: INFO: namespace: e2e-tests-kubectl-bcbwz, resource: bindings, ignored listing per whitelist
May 14 10:47:53.450: INFO: namespace e2e-tests-kubectl-bcbwz deletion completed in 22.138950655s
• [SLOW TEST:24.593 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 14 10:47:53.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
May 14 10:47:53.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-hz65w run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
May 14 10:47:58.204: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version.
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0514 10:47:58.133695 86 log.go:172] (0xc0006c4370) (0xc00041d9a0) Create stream\nI0514 10:47:58.133767 86 log.go:172] (0xc0006c4370) (0xc00041d9a0) Stream added, broadcasting: 1\nI0514 10:47:58.136381 86 log.go:172] (0xc0006c4370) Reply frame received for 1\nI0514 10:47:58.136461 86 log.go:172] (0xc0006c4370) (0xc0006ee000) Create stream\nI0514 10:47:58.136508 86 log.go:172] (0xc0006c4370) (0xc0006ee000) Stream added, broadcasting: 3\nI0514 10:47:58.137917 86 log.go:172] (0xc0006c4370) Reply frame received for 3\nI0514 10:47:58.137976 86 log.go:172] (0xc0006c4370) (0xc00041da40) Create stream\nI0514 10:47:58.137988 86 log.go:172] (0xc0006c4370) (0xc00041da40) Stream added, broadcasting: 5\nI0514 10:47:58.139333 86 log.go:172] (0xc0006c4370) Reply frame received for 5\nI0514 10:47:58.139403 86 log.go:172] (0xc0006c4370) (0xc0008aa000) Create stream\nI0514 10:47:58.139429 86 log.go:172] (0xc0006c4370) (0xc0008aa000) Stream added, broadcasting: 7\nI0514 10:47:58.140489 86 log.go:172] (0xc0006c4370) Reply frame received for 7\nI0514 10:47:58.140650 86 log.go:172] (0xc0006ee000) (3) Writing data frame\nI0514 10:47:58.140758 86 log.go:172] (0xc0006ee000) (3) Writing data frame\nI0514 10:47:58.141896 86 log.go:172] (0xc0006c4370) Data frame received for 5\nI0514 10:47:58.141913 86 log.go:172] (0xc00041da40) (5) Data frame handling\nI0514 10:47:58.141924 86 log.go:172] (0xc00041da40) (5) Data frame sent\nI0514 10:47:58.142512 86 log.go:172] (0xc0006c4370) Data frame received for 5\nI0514 10:47:58.142531 86 log.go:172] (0xc00041da40) (5) Data frame handling\nI0514 10:47:58.142552 86 log.go:172] (0xc00041da40) (5) Data frame sent\nI0514 10:47:58.182076 86 log.go:172] (0xc0006c4370) Data frame received for 7\nI0514 10:47:58.182137 86 log.go:172] (0xc0008aa000) (7) Data frame handling\nI0514 10:47:58.182163 86 log.go:172] (0xc0006c4370) Data frame received for 5\nI0514 10:47:58.182174 86 log.go:172] (0xc00041da40) (5) Data frame handling\nI0514 10:47:58.182341 86 log.go:172] (0xc0006c4370) (0xc0006ee000) Stream removed, broadcasting: 3\nI0514 10:47:58.182371 86 log.go:172] (0xc0006c4370) Data frame received for 1\nI0514 10:47:58.182388 86 log.go:172] (0xc00041d9a0) (1) Data frame handling\nI0514 10:47:58.182400 86 log.go:172] (0xc00041d9a0) (1) Data frame sent\nI0514 10:47:58.182411 86 log.go:172] (0xc0006c4370) (0xc00041d9a0) Stream removed, broadcasting: 1\nI0514 10:47:58.182437 86 log.go:172] (0xc0006c4370) Go away received\nI0514 10:47:58.182514 86 log.go:172] (0xc0006c4370) (0xc00041d9a0) Stream removed, broadcasting: 1\nI0514 10:47:58.182535 86 log.go:172] (0xc0006c4370) (0xc0006ee000) Stream removed, broadcasting: 3\nI0514 10:47:58.182546 86 log.go:172] (0xc0006c4370) (0xc00041da40) Stream removed, broadcasting: 5\nI0514 10:47:58.182558 86 log.go:172] (0xc0006c4370) (0xc0008aa000) Stream removed, broadcasting: 7\n" May 14 10:47:58.204: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:48:00.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hz65w" for this suite. 
May 14 10:48:06.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 10:48:06.306: INFO: namespace: e2e-tests-kubectl-hz65w, resource: bindings, ignored listing per whitelist
May 14 10:48:06.367: INFO: namespace e2e-tests-kubectl-hz65w deletion completed in 6.08827986s
• [SLOW TEST:12.917 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 14 10:48:06.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
May 14 10:48:06.493: INFO: Waiting up to 5m0s for pod "pod-64c818ec-95d0-11ea-9b22-0242ac110018" in namespace "e2e-tests-emptydir-zvs5w" to be "success or failure"
May 14 10:48:06.514: INFO: Pod "pod-64c818ec-95d0-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.878504ms
May 14 10:48:08.519: INFO: Pod "pod-64c818ec-95d0-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025425887s
May 14 10:48:10.523: INFO: Pod "pod-64c818ec-95d0-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.029573891s
May 14 10:48:12.527: INFO: Pod "pod-64c818ec-95d0-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033962168s
STEP: Saw pod success
May 14 10:48:12.527: INFO: Pod "pod-64c818ec-95d0-11ea-9b22-0242ac110018" satisfied condition "success or failure"
May 14 10:48:12.532: INFO: Trying to get logs from node hunter-worker2 pod pod-64c818ec-95d0-11ea-9b22-0242ac110018 container test-container:
STEP: delete the pod
May 14 10:48:12.839: INFO: Waiting for pod pod-64c818ec-95d0-11ea-9b22-0242ac110018 to disappear
May 14 10:48:12.893: INFO: Pod pod-64c818ec-95d0-11ea-9b22-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 14 10:48:12.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-zvs5w" for this suite.
May 14 10:48:18.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 10:48:18.918: INFO: namespace: e2e-tests-emptydir-zvs5w, resource: bindings, ignored listing per whitelist
May 14 10:48:19.001: INFO: namespace e2e-tests-emptydir-zvs5w deletion completed in 6.104638668s
• [SLOW TEST:12.634 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 14 10:48:19.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-6c4c65f6-95d0-11ea-9b22-0242ac110018
STEP: Creating a pod to test consume configMaps
May 14 10:48:19.158: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c51a8f4-95d0-11ea-9b22-0242ac110018" in namespace "e2e-tests-configmap-6p5qk" to be "success or failure"
May 14 10:48:19.161: INFO: Pod "pod-configmaps-6c51a8f4-95d0-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.618304ms
May 14 10:48:21.227: INFO: Pod "pod-configmaps-6c51a8f4-95d0-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069452873s
May 14 10:48:23.245: INFO: Pod "pod-configmaps-6c51a8f4-95d0-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087199357s
STEP: Saw pod success
May 14 10:48:23.245: INFO: Pod "pod-configmaps-6c51a8f4-95d0-11ea-9b22-0242ac110018" satisfied condition "success or failure"
May 14 10:48:23.248: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-6c51a8f4-95d0-11ea-9b22-0242ac110018 container configmap-volume-test:
STEP: delete the pod
May 14 10:48:23.276: INFO: Waiting for pod pod-configmaps-6c51a8f4-95d0-11ea-9b22-0242ac110018 to disappear
May 14 10:48:23.298: INFO: Pod pod-configmaps-6c51a8f4-95d0-11ea-9b22-0242ac110018 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 14 10:48:23.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-6p5qk" for this suite.
May 14 10:48:29.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 10:48:29.416: INFO: namespace: e2e-tests-configmap-6p5qk, resource: bindings, ignored listing per whitelist
May 14 10:48:29.420: INFO: namespace e2e-tests-configmap-6p5qk deletion completed in 6.116899313s
• [SLOW TEST:10.418 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 14 10:48:29.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
May 14 10:48:29.547: INFO: Waiting up to 5m0s for pod "client-containers-72820e5d-95d0-11ea-9b22-0242ac110018" in namespace "e2e-tests-containers-vqc54" to be "success or failure"
May 14 10:48:29.575: INFO: Pod "client-containers-72820e5d-95d0-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 28.030694ms
May 14 10:48:31.580: INFO: Pod "client-containers-72820e5d-95d0-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032554333s
May 14 10:48:33.584: INFO: Pod "client-containers-72820e5d-95d0-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03690805s
STEP: Saw pod success
May 14 10:48:33.584: INFO: Pod "client-containers-72820e5d-95d0-11ea-9b22-0242ac110018" satisfied condition "success or failure"
May 14 10:48:33.588: INFO: Trying to get logs from node hunter-worker2 pod client-containers-72820e5d-95d0-11ea-9b22-0242ac110018 container test-container:
STEP: delete the pod
May 14 10:48:33.625: INFO: Waiting for pod client-containers-72820e5d-95d0-11ea-9b22-0242ac110018 to disappear
May 14 10:48:33.629: INFO: Pod client-containers-72820e5d-95d0-11ea-9b22-0242ac110018 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 14 10:48:33.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-vqc54" for this suite.
May 14 10:48:39.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 10:48:39.687: INFO: namespace: e2e-tests-containers-vqc54, resource: bindings, ignored listing per whitelist
May 14 10:48:39.700: INFO: namespace e2e-tests-containers-vqc54 deletion completed in 6.067120502s
• [SLOW TEST:10.280 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 14 10:48:39.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May 14 10:48:39.962: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-bwvn9,SelfLink:/api/v1/namespaces/e2e-tests-watch-bwvn9/configmaps/e2e-watch-test-watch-closed,UID:78b2afc0-95d0-11ea-99e8-0242ac110002,ResourceVersion:10513742,Generation:0,CreationTimestamp:2020-05-14 10:48:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 14 10:48:39.963: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-bwvn9,SelfLink:/api/v1/namespaces/e2e-tests-watch-bwvn9/configmaps/e2e-watch-test-watch-closed,UID:78b2afc0-95d0-11ea-99e8-0242ac110002,ResourceVersion:10513743,Generation:0,CreationTimestamp:2020-05-14 10:48:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May 14 10:48:39.972: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-bwvn9,SelfLink:/api/v1/namespaces/e2e-tests-watch-bwvn9/configmaps/e2e-watch-test-watch-closed,UID:78b2afc0-95d0-11ea-99e8-0242ac110002,ResourceVersion:10513744,Generation:0,CreationTimestamp:2020-05-14 10:48:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 14 10:48:39.972: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-bwvn9,SelfLink:/api/v1/namespaces/e2e-tests-watch-bwvn9/configmaps/e2e-watch-test-watch-closed,UID:78b2afc0-95d0-11ea-99e8-0242ac110002,ResourceVersion:10513745,Generation:0,CreationTimestamp:2020-05-14 10:48:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 14 10:48:39.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-bwvn9" for this suite.
May 14 10:48:45.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 10:48:46.056: INFO: namespace: e2e-tests-watch-bwvn9, resource: bindings, ignored listing per whitelist
May 14 10:48:46.070: INFO: namespace e2e-tests-watch-bwvn9 deletion completed in 6.092981907s
• [SLOW TEST:6.370 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 14 10:48:46.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-bbhrj
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bbhrj to expose endpoints map[]
May 14 10:48:46.233: INFO: Get endpoints failed (11.798031ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 14 10:48:47.237: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bbhrj exposes endpoints map[] (1.01490556s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-bbhrj
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bbhrj to expose endpoints map[pod1:[100]]
May 14 10:48:51.284: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bbhrj exposes endpoints map[pod1:[100]] (4.041333811s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-bbhrj
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bbhrj to expose endpoints map[pod1:[100] pod2:[101]]
May 14 10:48:55.622: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bbhrj exposes endpoints map[pod1:[100] pod2:[101]] (4.334760706s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-bbhrj
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bbhrj to expose endpoints map[pod2:[101]]
May 14 10:48:56.650: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bbhrj exposes endpoints map[pod2:[101]] (1.023736681s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-bbhrj
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bbhrj to expose endpoints map[]
May 14 10:48:57.691: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bbhrj exposes endpoints map[] (1.035640461s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 14 10:48:57.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-bbhrj" for this suite.
May 14 10:49:03.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 10:49:03.771: INFO: namespace: e2e-tests-services-bbhrj, resource: bindings, ignored listing per whitelist
May 14 10:49:03.798: INFO: namespace e2e-tests-services-bbhrj deletion completed in 6.083865322s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:17.728 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 14 10:49:03.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 14 10:49:03.894: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86fe729d-95d0-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-r5w7d" to be "success or failure"
May 14 10:49:03.898: INFO: Pod "downwardapi-volume-86fe729d-95d0-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.615896ms
May 14 10:49:05.902: INFO: Pod "downwardapi-volume-86fe729d-95d0-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008083743s
May 14 10:49:07.907: INFO: Pod "downwardapi-volume-86fe729d-95d0-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012952339s
STEP: Saw pod success
May 14 10:49:07.907: INFO: Pod "downwardapi-volume-86fe729d-95d0-11ea-9b22-0242ac110018" satisfied condition "success or failure"
May 14 10:49:07.909: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-86fe729d-95d0-11ea-9b22-0242ac110018 container client-container:
STEP: delete the pod
May 14 10:49:07.996: INFO: Waiting for pod downwardapi-volume-86fe729d-95d0-11ea-9b22-0242ac110018 to disappear
May 14 10:49:08.006: INFO: Pod downwardapi-volume-86fe729d-95d0-11ea-9b22-0242ac110018 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 14 10:49:08.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r5w7d" for this suite.
May 14 10:49:14.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 10:49:14.087: INFO: namespace: e2e-tests-projected-r5w7d, resource: bindings, ignored listing per whitelist
May 14 10:49:14.172: INFO: namespace e2e-tests-projected-r5w7d deletion completed in 6.162619116s
• [SLOW TEST:10.374 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 14 10:49:14.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 14 10:49:22.414: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 14 10:49:22.449: INFO: Pod pod-with-poststart-http-hook still exists
May 14 10:49:24.449: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 14 10:49:24.454: INFO: Pod pod-with-poststart-http-hook still exists
May 14 10:49:26.449: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 14 10:49:26.453: INFO: Pod pod-with-poststart-http-hook still exists
May 14 10:49:28.449: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 14 10:49:28.453: INFO: Pod pod-with-poststart-http-hook still exists
May 14 10:49:30.449: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 14 10:49:30.453: INFO: Pod pod-with-poststart-http-hook still exists
May 14 10:49:32.449: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 14 10:49:32.454: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 14 10:49:32.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-92bgh" for this suite.
May 14 10:49:54.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 10:49:54.520: INFO: namespace: e2e-tests-container-lifecycle-hook-92bgh, resource: bindings, ignored listing per whitelist
May 14 10:49:54.534: INFO: namespace e2e-tests-container-lifecycle-hook-92bgh deletion completed in 22.076557849s
• [SLOW TEST:40.361 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 14 10:49:54.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 14 10:49:54.611: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 14 10:49:55.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-mb9cq" for this suite.
May 14 10:50:01.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 10:50:01.851: INFO: namespace: e2e-tests-custom-resource-definition-mb9cq, resource: bindings, ignored listing per whitelist
May 14 10:50:01.867: INFO: namespace e2e-tests-custom-resource-definition-mb9cq deletion completed in 6.120354784s
• [SLOW TEST:7.333 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 14 10:50:01.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-qb524
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
May 14 10:50:02.004: INFO: Found 0 stateful pods, waiting for 3
May 14 10:50:12.008: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 14 10:50:12.008: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 14 10:50:12.008: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
May 14 10:50:22.008: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 14 10:50:22.008: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 14 10:50:22.008: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
May 14 10:50:22.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qb524 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 14 10:50:22.269: INFO: stderr: "I0514 10:50:22.136540 113 log.go:172] (0xc0007c4c60) (0xc000883900) Create stream\nI0514 10:50:22.136587 113 log.go:172] (0xc0007c4c60) (0xc000883900) Stream added, broadcasting: 1\nI0514 10:50:22.140381 113 log.go:172] (0xc0007c4c60) Reply frame received for 1\nI0514 10:50:22.140438 113 log.go:172] (0xc0007c4c60) (0xc0007eab40) Create stream\nI0514 10:50:22.140459 113 log.go:172] (0xc0007c4c60) (0xc0007eab40) Stream
added, broadcasting: 3\nI0514 10:50:22.141272 113 log.go:172] (0xc0007c4c60) Reply frame received for 3\nI0514 10:50:22.141310 113 log.go:172] (0xc0007c4c60) (0xc0007eabe0) Create stream\nI0514 10:50:22.141319 113 log.go:172] (0xc0007c4c60) (0xc0007eabe0) Stream added, broadcasting: 5\nI0514 10:50:22.142015 113 log.go:172] (0xc0007c4c60) Reply frame received for 5\nI0514 10:50:22.260524 113 log.go:172] (0xc0007c4c60) Data frame received for 3\nI0514 10:50:22.260565 113 log.go:172] (0xc0007eab40) (3) Data frame handling\nI0514 10:50:22.260577 113 log.go:172] (0xc0007eab40) (3) Data frame sent\nI0514 10:50:22.260583 113 log.go:172] (0xc0007c4c60) Data frame received for 3\nI0514 10:50:22.260587 113 log.go:172] (0xc0007eab40) (3) Data frame handling\nI0514 10:50:22.260609 113 log.go:172] (0xc0007c4c60) Data frame received for 5\nI0514 10:50:22.260615 113 log.go:172] (0xc0007eabe0) (5) Data frame handling\nI0514 10:50:22.262599 113 log.go:172] (0xc0007c4c60) Data frame received for 1\nI0514 10:50:22.262621 113 log.go:172] (0xc000883900) (1) Data frame handling\nI0514 10:50:22.262629 113 log.go:172] (0xc000883900) (1) Data frame sent\nI0514 10:50:22.262784 113 log.go:172] (0xc0007c4c60) (0xc000883900) Stream removed, broadcasting: 1\nI0514 10:50:22.262943 113 log.go:172] (0xc0007c4c60) (0xc000883900) Stream removed, broadcasting: 1\nI0514 10:50:22.262962 113 log.go:172] (0xc0007c4c60) (0xc0007eab40) Stream removed, broadcasting: 3\nI0514 10:50:22.263019 113 log.go:172] (0xc0007c4c60) Go away received\nI0514 10:50:22.263099 113 log.go:172] (0xc0007c4c60) (0xc0007eabe0) Stream removed, broadcasting: 5\n" May 14 10:50:22.269: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 10:50:22.269: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 14 10:50:32.298: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 14 10:50:42.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qb524 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 10:50:42.772: INFO: stderr: "I0514 10:50:42.712868 134 log.go:172] (0xc000878210) (0xc0008765a0) Create stream\nI0514 10:50:42.712930 134 log.go:172] (0xc000878210) (0xc0008765a0) Stream added, broadcasting: 1\nI0514 10:50:42.715174 134 log.go:172] (0xc000878210) Reply frame received for 1\nI0514 10:50:42.715233 134 log.go:172] (0xc000878210) (0xc000876640) Create stream\nI0514 10:50:42.715244 134 log.go:172] (0xc000878210) (0xc000876640) Stream added, broadcasting: 3\nI0514 10:50:42.716003 134 log.go:172] (0xc000878210) Reply frame received for 3\nI0514 10:50:42.716035 134 log.go:172] (0xc000878210) (0xc0008766e0) Create stream\nI0514 10:50:42.716045 134 log.go:172] (0xc000878210) (0xc0008766e0) Stream added, broadcasting: 5\nI0514 10:50:42.716802 134 log.go:172] (0xc000878210) Reply frame received for 5\nI0514 10:50:42.768065 134 log.go:172] (0xc000878210) Data frame received for 5\nI0514 10:50:42.768112 134 log.go:172] (0xc0008766e0) (5) Data frame handling\nI0514 10:50:42.768135 134 log.go:172] (0xc000878210) Data frame received for 3\nI0514 10:50:42.768145 134 log.go:172] (0xc000876640) (3) Data frame handling\nI0514 10:50:42.768155 134 log.go:172] (0xc000876640) (3) Data frame 
sent\nI0514 10:50:42.768163 134 log.go:172] (0xc000878210) Data frame received for 3\nI0514 10:50:42.768171 134 log.go:172] (0xc000876640) (3) Data frame handling\nI0514 10:50:42.769492 134 log.go:172] (0xc000878210) Data frame received for 1\nI0514 10:50:42.769520 134 log.go:172] (0xc0008765a0) (1) Data frame handling\nI0514 10:50:42.769551 134 log.go:172] (0xc0008765a0) (1) Data frame sent\nI0514 10:50:42.769569 134 log.go:172] (0xc000878210) (0xc0008765a0) Stream removed, broadcasting: 1\nI0514 10:50:42.769586 134 log.go:172] (0xc000878210) Go away received\nI0514 10:50:42.769763 134 log.go:172] (0xc000878210) (0xc0008765a0) Stream removed, broadcasting: 1\nI0514 10:50:42.769774 134 log.go:172] (0xc000878210) (0xc000876640) Stream removed, broadcasting: 3\nI0514 10:50:42.769780 134 log.go:172] (0xc000878210) (0xc0008766e0) Stream removed, broadcasting: 5\n" May 14 10:50:42.773: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 10:50:42.773: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 10:50:52.794: INFO: Waiting for StatefulSet e2e-tests-statefulset-qb524/ss2 to complete update May 14 10:50:52.794: INFO: Waiting for Pod e2e-tests-statefulset-qb524/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 14 10:50:52.794: INFO: Waiting for Pod e2e-tests-statefulset-qb524/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 14 10:51:02.803: INFO: Waiting for StatefulSet e2e-tests-statefulset-qb524/ss2 to complete update May 14 10:51:02.803: INFO: Waiting for Pod e2e-tests-statefulset-qb524/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 14 10:51:12.801: INFO: Waiting for StatefulSet e2e-tests-statefulset-qb524/ss2 to complete update STEP: Rolling back to a previous revision May 14 10:51:22.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qb524 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 10:51:23.124: INFO: stderr: "I0514 10:51:22.944094 156 log.go:172] (0xc000138790) (0xc0005e1400) Create stream\nI0514 10:51:22.944151 156 log.go:172] (0xc000138790) (0xc0005e1400) Stream added, broadcasting: 1\nI0514 10:51:22.946631 156 log.go:172] (0xc000138790) Reply frame received for 1\nI0514 10:51:22.946683 156 log.go:172] (0xc000138790) (0xc0005e14a0) Create stream\nI0514 10:51:22.946697 156 log.go:172] (0xc000138790) (0xc0005e14a0) Stream added, broadcasting: 3\nI0514 10:51:22.947757 156 log.go:172] (0xc000138790) Reply frame received for 3\nI0514 10:51:22.947827 156 log.go:172] (0xc000138790) (0xc0006f8000) Create stream\nI0514 10:51:22.947842 156 log.go:172] (0xc000138790) (0xc0006f8000) Stream added, broadcasting: 5\nI0514 10:51:22.948750 156 log.go:172] (0xc000138790) Reply frame received for 5\nI0514 10:51:23.116837 156 log.go:172] (0xc000138790) Data frame received for 5\nI0514 10:51:23.116889 156 log.go:172] (0xc0006f8000) (5) Data frame handling\nI0514 10:51:23.116924 156 log.go:172] (0xc000138790) Data frame received for 3\nI0514 10:51:23.116940 156 log.go:172] (0xc0005e14a0) (3) Data frame handling\nI0514 10:51:23.116968 156 log.go:172] (0xc0005e14a0) (3) Data frame sent\nI0514 10:51:23.116996 156 log.go:172] (0xc000138790) Data frame received for 3\nI0514 10:51:23.117010 156 log.go:172] (0xc0005e14a0) (3) Data frame handling\nI0514 10:51:23.120125 156 log.go:172] (0xc000138790) Data frame received for 
1\nI0514 10:51:23.120159 156 log.go:172] (0xc0005e1400) (1) Data frame handling\nI0514 10:51:23.120196 156 log.go:172] (0xc0005e1400) (1) Data frame sent\nI0514 10:51:23.120225 156 log.go:172] (0xc000138790) (0xc0005e1400) Stream removed, broadcasting: 1\nI0514 10:51:23.120388 156 log.go:172] (0xc000138790) Go away received\nI0514 10:51:23.120521 156 log.go:172] (0xc000138790) (0xc0005e1400) Stream removed, broadcasting: 1\nI0514 10:51:23.120557 156 log.go:172] (0xc000138790) (0xc0005e14a0) Stream removed, broadcasting: 3\nI0514 10:51:23.120655 156 log.go:172] (0xc000138790) (0xc0006f8000) Stream removed, broadcasting: 5\n" May 14 10:51:23.124: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 10:51:23.124: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 10:51:33.158: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 14 10:51:43.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-qb524 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 10:51:43.448: INFO: stderr: "I0514 10:51:43.337772 178 log.go:172] (0xc000138790) (0xc00077c640) Create stream\nI0514 10:51:43.337843 178 log.go:172] (0xc000138790) (0xc00077c640) Stream added, broadcasting: 1\nI0514 10:51:43.339804 178 log.go:172] (0xc000138790) Reply frame received for 1\nI0514 10:51:43.339840 178 log.go:172] (0xc000138790) (0xc0005ceb40) Create stream\nI0514 10:51:43.339851 178 log.go:172] (0xc000138790) (0xc0005ceb40) Stream added, broadcasting: 3\nI0514 10:51:43.340749 178 log.go:172] (0xc000138790) Reply frame received for 3\nI0514 10:51:43.340774 178 log.go:172] (0xc000138790) (0xc00077c6e0) Create stream\nI0514 10:51:43.340786 178 log.go:172] (0xc000138790) (0xc00077c6e0) Stream added, broadcasting: 5\nI0514 10:51:43.341904 178 log.go:172] (0xc000138790) Reply frame received for 5\nI0514 10:51:43.441481 178 log.go:172] (0xc000138790) Data frame received for 5\nI0514 10:51:43.441527 178 log.go:172] (0xc00077c6e0) (5) Data frame handling\nI0514 10:51:43.441561 178 log.go:172] (0xc000138790) Data frame received for 3\nI0514 10:51:43.441612 178 log.go:172] (0xc0005ceb40) (3) Data frame handling\nI0514 10:51:43.441643 178 log.go:172] (0xc0005ceb40) (3) Data frame sent\nI0514 10:51:43.441658 178 log.go:172] (0xc000138790) Data frame received for 3\nI0514 10:51:43.441668 178 log.go:172] (0xc0005ceb40) (3) Data frame handling\nI0514 10:51:43.442881 178 log.go:172] (0xc000138790) Data frame received for 1\nI0514 10:51:43.442902 178 log.go:172] (0xc00077c640) (1) Data frame handling\nI0514 10:51:43.442920 178 log.go:172] (0xc00077c640) (1) Data frame sent\nI0514 10:51:43.442942 178 log.go:172] (0xc000138790) (0xc00077c640) Stream removed, broadcasting: 1\nI0514 10:51:43.443064 178 log.go:172] (0xc000138790) Go away received\nI0514 10:51:43.443133 178 log.go:172] (0xc000138790) (0xc00077c640) Stream removed, broadcasting: 1\nI0514 10:51:43.443149 178 log.go:172] (0xc000138790) (0xc0005ceb40) Stream removed, broadcasting: 3\nI0514 10:51:43.443158 178 log.go:172] (0xc000138790) (0xc00077c6e0) Stream removed, broadcasting: 5\n" May 14 10:51:43.448: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 10:51:43.448: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 10:51:53.466: 
INFO: Waiting for StatefulSet e2e-tests-statefulset-qb524/ss2 to complete update May 14 10:51:53.467: INFO: Waiting for Pod e2e-tests-statefulset-qb524/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 14 10:51:53.467: INFO: Waiting for Pod e2e-tests-statefulset-qb524/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 14 10:52:03.471: INFO: Waiting for StatefulSet e2e-tests-statefulset-qb524/ss2 to complete update May 14 10:52:03.471: INFO: Waiting for Pod e2e-tests-statefulset-qb524/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd May 14 10:52:13.474: INFO: Waiting for StatefulSet e2e-tests-statefulset-qb524/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 14 10:52:23.473: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qb524 May 14 10:52:23.476: INFO: Scaling statefulset ss2 to 0 May 14 10:52:53.501: INFO: Waiting for statefulset status.replicas updated to 0 May 14 10:52:53.503: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:52:53.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-qb524" for this suite. May 14 10:53:01.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:53:01.690: INFO: namespace: e2e-tests-statefulset-qb524, resource: bindings, ignored listing per whitelist May 14 10:53:01.738: INFO: namespace e2e-tests-statefulset-qb524 deletion completed in 8.204775238s • [SLOW TEST:179.871 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:53:01.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0514 10:53:02.998616 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 14 10:53:02.998: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:53:02.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-sts55" for this suite. May 14 10:53:09.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:53:09.065: INFO: namespace: e2e-tests-gc-sts55, resource: bindings, ignored listing per whitelist May 14 10:53:09.087: INFO: namespace e2e-tests-gc-sts55 deletion completed in 6.085879952s • [SLOW TEST:7.348 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:53:09.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-g9thp May 14 10:53:15.257: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-g9thp STEP: checking the pod's current state and verifying that restartCount is present May 14 10:53:15.259: INFO: Initial restart count of pod liveness-exec is 0 May 14 10:54:07.931: INFO: Restart count of pod e2e-tests-container-probe-g9thp/liveness-exec is now 1 (52.672679566s elapsed) STEP: deleting the pod 
[AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:54:07.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-g9thp" for this suite. May 14 10:54:13.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:54:14.007: INFO: namespace: e2e-tests-container-probe-g9thp, resource: bindings, ignored listing per whitelist May 14 10:54:14.059: INFO: namespace e2e-tests-container-probe-g9thp deletion completed in 6.089631381s • [SLOW TEST:64.972 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:54:14.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 14 10:54:14.184: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:54:21.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-bkd8z" for this suite. 
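For reference, the shape of a RestartNever pod with init containers that the case above exercises looks roughly like the sketch below; the pod name, images, and commands are illustrative assumptions, not the fixture the suite actually created. Init containers run one at a time, in order, and each must exit successfully before the regular containers start.
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo                  # hypothetical name, not the test's pod
  spec:
    restartPolicy: Never
    initContainers:                  # run sequentially to completion before "containers" start
    - name: init-1
      image: busybox
      command: ['sh', '-c', 'true']
    - name: init-2
      image: busybox
      command: ['sh', '-c', 'true']
    containers:
    - name: run-after-init
      image: busybox
      command: ['sh', '-c', 'echo init containers finished']
  EOF
  # "kubectl get pod init-demo" progresses through Init:0/2, Init:1/2, PodInitializing, Completed.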
May 14 10:54:27.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:54:27.659: INFO: namespace: e2e-tests-init-container-bkd8z, resource: bindings, ignored listing per whitelist May 14 10:54:27.697: INFO: namespace e2e-tests-init-container-bkd8z deletion completed in 6.089976759s • [SLOW TEST:13.637 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:54:27.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 10:54:27.938: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4813a867-95d1-11ea-9b22-0242ac110018" in namespace "e2e-tests-downward-api-qcfcj" to be "success or failure" May 14 10:54:27.956: INFO: Pod "downwardapi-volume-4813a867-95d1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.190357ms May 14 10:54:29.960: INFO: Pod "downwardapi-volume-4813a867-95d1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022009568s May 14 10:54:31.972: INFO: Pod "downwardapi-volume-4813a867-95d1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033462758s May 14 10:54:33.975: INFO: Pod "downwardapi-volume-4813a867-95d1-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036911465s STEP: Saw pod success May 14 10:54:33.975: INFO: Pod "downwardapi-volume-4813a867-95d1-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 10:54:33.978: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-4813a867-95d1-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 10:54:33.999: INFO: Waiting for pod downwardapi-volume-4813a867-95d1-11ea-9b22-0242ac110018 to disappear May 14 10:54:34.004: INFO: Pod downwardapi-volume-4813a867-95d1-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:54:34.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qcfcj" for this suite. 
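The memory-limit case above relies on a downwardAPI volume backed by a resourceFieldRef; a minimal sketch of that wiring, with the pod name, image, limit value, and mount path chosen here as assumptions, is:
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-limit-demo        # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ['sh', '-c', 'cat /etc/podinfo/memory_limit']
      resources:
        limits:
          memory: "64Mi"
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory   # exposed to the container as a byte count
  EOF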
May 14 10:54:40.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:54:40.127: INFO: namespace: e2e-tests-downward-api-qcfcj, resource: bindings, ignored listing per whitelist May 14 10:54:40.164: INFO: namespace e2e-tests-downward-api-qcfcj deletion completed in 6.158551221s • [SLOW TEST:12.467 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:54:40.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-4f8377d2-95d1-11ea-9b22-0242ac110018 May 14 10:54:40.340: INFO: Pod name my-hostname-basic-4f8377d2-95d1-11ea-9b22-0242ac110018: Found 0 pods out of 1 May 14 10:54:45.346: INFO: Pod name my-hostname-basic-4f8377d2-95d1-11ea-9b22-0242ac110018: Found 1 pods out of 1 May 14 10:54:45.346: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-4f8377d2-95d1-11ea-9b22-0242ac110018" are running May 14 10:54:45.349: INFO: Pod "my-hostname-basic-4f8377d2-95d1-11ea-9b22-0242ac110018-tnhrb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 10:54:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 10:54:43 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 10:54:43 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 10:54:40 +0000 UTC Reason: Message:}]) May 14 10:54:45.349: INFO: Trying to dial the pod May 14 10:54:50.359: INFO: Controller my-hostname-basic-4f8377d2-95d1-11ea-9b22-0242ac110018: Got expected result from replica 1 [my-hostname-basic-4f8377d2-95d1-11ea-9b22-0242ac110018-tnhrb]: "my-hostname-basic-4f8377d2-95d1-11ea-9b22-0242ac110018-tnhrb", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:54:50.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-c5tm5" for this suite. 
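A ReplicationController of the sort this case creates can be sketched as below; the test's own image serves each pod's hostname, while this sketch substitutes the nginx image already used elsewhere in this run, and every name is an assumption.
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: basic-image-rc             # hypothetical name
  spec:
    replicas: 1
    selector:
      app: basic-image-rc
    template:
      metadata:
        labels:
          app: basic-image-rc
      spec:
        containers:
        - name: serve
          image: docker.io/library/nginx:1.14-alpine
          ports:
          - containerPort: 80
  EOF
  # kubectl get pods -l app=basic-image-rc   # mirrors the test's wait: all replicas should reach Running/Ready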
May 14 10:54:56.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:54:56.410: INFO: namespace: e2e-tests-replication-controller-c5tm5, resource: bindings, ignored listing per whitelist May 14 10:54:56.446: INFO: namespace e2e-tests-replication-controller-c5tm5 deletion completed in 6.081329104s • [SLOW TEST:16.281 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:54:56.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 10:54:56.823: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:55:00.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-rqb7k" for this suite. 
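The websocket case above drives the pod exec subresource directly against the API server; a rough everyday equivalent, with the pod name and command as assumptions, is the kubectl form below (kubectl negotiates the streaming upgrade against the same endpoint).
  # Ordinary CLI form of remote command execution:
  kubectl exec exec-demo-pod -- /bin/sh -c 'echo remote exec works'
  # Shape of the API-server endpoint that the test dials as a websocket:
  #   /api/v1/namespaces/<namespace>/pods/<pod>/exec?command=/bin/sh&command=-c&command=<cmd>&stdout=true&stderr=true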
May 14 10:55:43.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:55:43.193: INFO: namespace: e2e-tests-pods-rqb7k, resource: bindings, ignored listing per whitelist May 14 10:55:43.237: INFO: namespace e2e-tests-pods-rqb7k deletion completed in 42.116360552s • [SLOW TEST:46.792 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:55:43.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 10:55:43.407: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75190f85-95d1-11ea-9b22-0242ac110018" in namespace "e2e-tests-downward-api-7fkzk" to be "success or failure" May 14 10:55:43.412: INFO: Pod "downwardapi-volume-75190f85-95d1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.412282ms May 14 10:55:45.583: INFO: Pod "downwardapi-volume-75190f85-95d1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175651577s May 14 10:55:47.586: INFO: Pod "downwardapi-volume-75190f85-95d1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178867295s May 14 10:55:49.589: INFO: Pod "downwardapi-volume-75190f85-95d1-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.182439904s STEP: Saw pod success May 14 10:55:49.589: INFO: Pod "downwardapi-volume-75190f85-95d1-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 10:55:49.592: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-75190f85-95d1-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 10:55:49.612: INFO: Waiting for pod downwardapi-volume-75190f85-95d1-11ea-9b22-0242ac110018 to disappear May 14 10:55:49.617: INFO: Pod downwardapi-volume-75190f85-95d1-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:55:49.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-7fkzk" for this suite. 
May 14 10:55:55.663: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:55:55.688: INFO: namespace: e2e-tests-downward-api-7fkzk, resource: bindings, ignored listing per whitelist May 14 10:55:55.738: INFO: namespace e2e-tests-downward-api-7fkzk deletion completed in 6.118533765s • [SLOW TEST:12.500 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:55:55.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 14 10:55:56.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2mksm' May 14 10:55:57.099: INFO: stderr: "" May 14 10:55:57.099: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 14 10:55:58.103: INFO: Selector matched 1 pods for map[app:redis] May 14 10:55:58.104: INFO: Found 0 / 1 May 14 10:55:59.103: INFO: Selector matched 1 pods for map[app:redis] May 14 10:55:59.103: INFO: Found 0 / 1 May 14 10:56:00.104: INFO: Selector matched 1 pods for map[app:redis] May 14 10:56:00.104: INFO: Found 0 / 1 May 14 10:56:01.103: INFO: Selector matched 1 pods for map[app:redis] May 14 10:56:01.103: INFO: Found 0 / 1 May 14 10:56:02.251: INFO: Selector matched 1 pods for map[app:redis] May 14 10:56:02.251: INFO: Found 1 / 1 May 14 10:56:02.251: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 14 10:56:02.253: INFO: Selector matched 1 pods for map[app:redis] May 14 10:56:02.253: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 14 10:56:02.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-x74g7 --namespace=e2e-tests-kubectl-2mksm -p {"metadata":{"annotations":{"x":"y"}}}' May 14 10:56:02.349: INFO: stderr: "" May 14 10:56:02.349: INFO: stdout: "pod/redis-master-x74g7 patched\n" STEP: checking annotations May 14 10:56:02.353: INFO: Selector matched 1 pods for map[app:redis] May 14 10:56:02.353: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:56:02.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2mksm" for this suite. 
May 14 10:56:26.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:56:26.385: INFO: namespace: e2e-tests-kubectl-2mksm, resource: bindings, ignored listing per whitelist May 14 10:56:26.434: INFO: namespace e2e-tests-kubectl-2mksm deletion completed in 24.07957132s • [SLOW TEST:30.696 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:56:26.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 14 10:56:31.664: INFO: Successfully updated pod "labelsupdate8f0828ff-95d1-11ea-9b22-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:56:34.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-x7fvp" for this suite. 
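The label-update case combines a downwardAPI fieldRef with a live relabel; a minimal sketch, with every name below an assumption, looks like this. After the kubectl label call, the mounted file is rewritten on the kubelet's next sync.
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: labels-demo                # hypothetical name
    labels:
      tier: initial
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ['sh', '-c', 'while true; do cat /etc/podinfo/labels; sleep 5; done']
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
  EOF
  kubectl label pod labels-demo tier=updated --overwrite   # /etc/podinfo/labels picks up the change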
May 14 10:56:56.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:56:56.215: INFO: namespace: e2e-tests-downward-api-x7fvp, resource: bindings, ignored listing per whitelist May 14 10:56:56.234: INFO: namespace e2e-tests-downward-api-x7fvp deletion completed in 22.139860313s • [SLOW TEST:29.800 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:56:56.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 14 10:56:56.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-s6pg5' May 14 10:56:56.556: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 14 10:56:56.556: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 14 10:56:56.560: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 14 10:56:56.570: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 14 10:56:56.620: INFO: scanned /root for discovery docs: May 14 10:56:56.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-s6pg5' May 14 10:57:12.468: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 14 10:57:12.468: INFO: stdout: "Created e2e-test-nginx-rc-327bffb173fd92204bbfcbe90b72e0d8\nScaling up e2e-test-nginx-rc-327bffb173fd92204bbfcbe90b72e0d8 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-327bffb173fd92204bbfcbe90b72e0d8 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-327bffb173fd92204bbfcbe90b72e0d8 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" May 14 10:57:12.468: INFO: stdout: "Created e2e-test-nginx-rc-327bffb173fd92204bbfcbe90b72e0d8\nScaling up e2e-test-nginx-rc-327bffb173fd92204bbfcbe90b72e0d8 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-327bffb173fd92204bbfcbe90b72e0d8 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-327bffb173fd92204bbfcbe90b72e0d8 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 14 10:57:12.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-s6pg5' May 14 10:57:12.561: INFO: stderr: "" May 14 10:57:12.562: INFO: stdout: "e2e-test-nginx-rc-327bffb173fd92204bbfcbe90b72e0d8-dwdgj " May 14 10:57:12.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-327bffb173fd92204bbfcbe90b72e0d8-dwdgj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6pg5' May 14 10:57:12.679: INFO: stderr: "" May 14 10:57:12.679: INFO: stdout: "true" May 14 10:57:12.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-327bffb173fd92204bbfcbe90b72e0d8-dwdgj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-s6pg5' May 14 10:57:12.771: INFO: stderr: "" May 14 10:57:12.771: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 14 10:57:12.771: INFO: e2e-test-nginx-rc-327bffb173fd92204bbfcbe90b72e0d8-dwdgj is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 May 14 10:57:12.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-s6pg5' May 14 10:57:12.860: INFO: stderr: "" May 14 10:57:12.860: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:57:12.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-s6pg5" for this suite. 
May 14 10:57:34.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:57:34.940: INFO: namespace: e2e-tests-kubectl-s6pg5, resource: bindings, ignored listing per whitelist May 14 10:57:34.998: INFO: namespace e2e-tests-kubectl-s6pg5 deletion completed in 22.078581997s • [SLOW TEST:38.764 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:57:34.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-mzkvw/configmap-test-b7b5d2f3-95d1-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume configMaps May 14 10:57:35.192: INFO: Waiting up to 5m0s for pod "pod-configmaps-b7bbe678-95d1-11ea-9b22-0242ac110018" in namespace "e2e-tests-configmap-mzkvw" to be "success or failure" May 14 10:57:35.208: INFO: Pod "pod-configmaps-b7bbe678-95d1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.968632ms May 14 10:57:37.213: INFO: Pod "pod-configmaps-b7bbe678-95d1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020730456s May 14 10:57:39.256: INFO: Pod "pod-configmaps-b7bbe678-95d1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063608734s May 14 10:57:41.259: INFO: Pod "pod-configmaps-b7bbe678-95d1-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.066696142s STEP: Saw pod success May 14 10:57:41.259: INFO: Pod "pod-configmaps-b7bbe678-95d1-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 10:57:41.261: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-b7bbe678-95d1-11ea-9b22-0242ac110018 container env-test: STEP: delete the pod May 14 10:57:41.275: INFO: Waiting for pod pod-configmaps-b7bbe678-95d1-11ea-9b22-0242ac110018 to disappear May 14 10:57:41.279: INFO: Pod pod-configmaps-b7bbe678-95d1-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:57:41.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-mzkvw" for this suite. 
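The environment-variable case maps a ConfigMap key into a container env var; a self-contained sketch, with the ConfigMap name, key, and pod name all assumed, is:
  kubectl create configmap env-demo-config --from-literal=data-1=value-1
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-env-demo         # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ['sh', '-c', 'env | grep CONFIG_DATA_1']
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef:
            name: env-demo-config
            key: data-1
  EOF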
May 14 10:57:47.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:57:47.356: INFO: namespace: e2e-tests-configmap-mzkvw, resource: bindings, ignored listing per whitelist May 14 10:57:47.366: INFO: namespace e2e-tests-configmap-mzkvw deletion completed in 6.085085882s • [SLOW TEST:12.368 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:57:47.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc May 14 10:57:47.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7xmzx' May 14 10:57:50.128: INFO: stderr: "" May 14 10:57:50.128: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for Redis master to start. May 14 10:57:51.132: INFO: Selector matched 1 pods for map[app:redis] May 14 10:57:51.132: INFO: Found 0 / 1 May 14 10:57:52.291: INFO: Selector matched 1 pods for map[app:redis] May 14 10:57:52.291: INFO: Found 0 / 1 May 14 10:57:53.147: INFO: Selector matched 1 pods for map[app:redis] May 14 10:57:53.147: INFO: Found 0 / 1 May 14 10:57:54.131: INFO: Selector matched 1 pods for map[app:redis] May 14 10:57:54.131: INFO: Found 0 / 1 May 14 10:57:55.309: INFO: Selector matched 1 pods for map[app:redis] May 14 10:57:55.309: INFO: Found 1 / 1 May 14 10:57:55.309: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 14 10:57:55.313: INFO: Selector matched 1 pods for map[app:redis] May 14 10:57:55.313: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings May 14 10:57:55.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qnm2d redis-master --namespace=e2e-tests-kubectl-7xmzx' May 14 10:57:55.548: INFO: stderr: "" May 14 10:57:55.548: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 14 May 10:57:53.435 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 May 10:57:53.435 # Server started, Redis version 3.2.12\n1:M 14 May 10:57:53.435 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 May 10:57:53.435 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines May 14 10:57:55.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qnm2d redis-master --namespace=e2e-tests-kubectl-7xmzx --tail=1' May 14 10:57:55.657: INFO: stderr: "" May 14 10:57:55.657: INFO: stdout: "1:M 14 May 10:57:53.435 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes May 14 10:57:55.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qnm2d redis-master --namespace=e2e-tests-kubectl-7xmzx --limit-bytes=1' May 14 10:57:55.779: INFO: stderr: "" May 14 10:57:55.779: INFO: stdout: " " STEP: exposing timestamps May 14 10:57:55.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qnm2d redis-master --namespace=e2e-tests-kubectl-7xmzx --tail=1 --timestamps' May 14 10:57:55.906: INFO: stderr: "" May 14 10:57:55.906: INFO: stdout: "2020-05-14T10:57:53.435770943Z 1:M 14 May 10:57:53.435 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range May 14 10:57:58.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qnm2d redis-master --namespace=e2e-tests-kubectl-7xmzx --since=1s' May 14 10:57:58.625: INFO: stderr: "" May 14 10:57:58.625: INFO: stdout: "" May 14 10:57:58.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-qnm2d redis-master --namespace=e2e-tests-kubectl-7xmzx --since=24h' May 14 10:57:58.738: INFO: stderr: "" May 14 10:57:58.738: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 14 May 10:57:53.435 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 May 10:57:53.435 # Server started, Redis version 3.2.12\n1:M 14 May 10:57:53.435 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. 
This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 May 10:57:53.435 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources May 14 10:57:58.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7xmzx' May 14 10:57:58.832: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 10:57:58.832: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" May 14 10:57:58.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-7xmzx' May 14 10:57:59.219: INFO: stderr: "No resources found.\n" May 14 10:57:59.219: INFO: stdout: "" May 14 10:57:59.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-7xmzx -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 14 10:57:59.338: INFO: stderr: "" May 14 10:57:59.338: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:57:59.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-7xmzx" for this suite. 
May 14 10:58:17.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:58:17.426: INFO: namespace: e2e-tests-kubectl-7xmzx, resource: bindings, ignored listing per whitelist May 14 10:58:17.456: INFO: namespace e2e-tests-kubectl-7xmzx deletion completed in 18.115514779s • [SLOW TEST:30.089 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:58:17.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-d108c570-95d1-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume configMaps May 14 10:58:17.634: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d1097c16-95d1-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-nxbl2" to be "success or failure" May 14 10:58:17.717: INFO: Pod "pod-projected-configmaps-d1097c16-95d1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 82.995035ms May 14 10:58:19.920: INFO: Pod "pod-projected-configmaps-d1097c16-95d1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.28570817s May 14 10:58:21.923: INFO: Pod "pod-projected-configmaps-d1097c16-95d1-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.289467038s STEP: Saw pod success May 14 10:58:21.923: INFO: Pod "pod-projected-configmaps-d1097c16-95d1-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 10:58:21.926: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-d1097c16-95d1-11ea-9b22-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 14 10:58:21.984: INFO: Waiting for pod pod-projected-configmaps-d1097c16-95d1-11ea-9b22-0242ac110018 to disappear May 14 10:58:22.076: INFO: Pod pod-projected-configmaps-d1097c16-95d1-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:58:22.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-nxbl2" for this suite. 
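The projected variant mounts ConfigMap data through a projected volume, which can aggregate several sources under one mount point; a sketch with assumed names:
  kubectl create configmap projected-demo-config --from-literal=data-1=value-1
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-configmap-demo   # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ['sh', '-c', 'cat /etc/projected/data-1']
      volumeMounts:
      - name: podinfo
        mountPath: /etc/projected
    volumes:
    - name: podinfo
      projected:
        sources:
        - configMap:
            name: projected-demo-config
            items:
            - key: data-1
              path: data-1
  EOF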
May 14 10:58:28.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:58:28.166: INFO: namespace: e2e-tests-projected-nxbl2, resource: bindings, ignored listing per whitelist May 14 10:58:28.213: INFO: namespace e2e-tests-projected-nxbl2 deletion completed in 6.133042834s • [SLOW TEST:10.757 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:58:28.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-ss6q STEP: Creating a pod to test atomic-volume-subpath May 14 10:58:28.330: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ss6q" in namespace "e2e-tests-subpath-xz4ch" to be "success or failure" May 14 10:58:28.347: INFO: Pod "pod-subpath-test-downwardapi-ss6q": Phase="Pending", Reason="", readiness=false. Elapsed: 16.889042ms May 14 10:58:30.484: INFO: Pod "pod-subpath-test-downwardapi-ss6q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153721484s May 14 10:58:32.554: INFO: Pod "pod-subpath-test-downwardapi-ss6q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224445809s May 14 10:58:34.574: INFO: Pod "pod-subpath-test-downwardapi-ss6q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.243902612s May 14 10:58:36.579: INFO: Pod "pod-subpath-test-downwardapi-ss6q": Phase="Running", Reason="", readiness=false. Elapsed: 8.248709191s May 14 10:58:38.584: INFO: Pod "pod-subpath-test-downwardapi-ss6q": Phase="Running", Reason="", readiness=false. Elapsed: 10.253718607s May 14 10:58:40.587: INFO: Pod "pod-subpath-test-downwardapi-ss6q": Phase="Running", Reason="", readiness=false. Elapsed: 12.257229335s May 14 10:58:42.590: INFO: Pod "pod-subpath-test-downwardapi-ss6q": Phase="Running", Reason="", readiness=false. Elapsed: 14.260636968s May 14 10:58:44.594: INFO: Pod "pod-subpath-test-downwardapi-ss6q": Phase="Running", Reason="", readiness=false. Elapsed: 16.264585801s May 14 10:58:46.598: INFO: Pod "pod-subpath-test-downwardapi-ss6q": Phase="Running", Reason="", readiness=false. Elapsed: 18.268030711s May 14 10:58:48.602: INFO: Pod "pod-subpath-test-downwardapi-ss6q": Phase="Running", Reason="", readiness=false. Elapsed: 20.272366653s May 14 10:58:50.612: INFO: Pod "pod-subpath-test-downwardapi-ss6q": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.281964219s May 14 10:58:52.617: INFO: Pod "pod-subpath-test-downwardapi-ss6q": Phase="Running", Reason="", readiness=false. Elapsed: 24.287535329s May 14 10:58:54.621: INFO: Pod "pod-subpath-test-downwardapi-ss6q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.291141936s STEP: Saw pod success May 14 10:58:54.621: INFO: Pod "pod-subpath-test-downwardapi-ss6q" satisfied condition "success or failure" May 14 10:58:54.623: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-ss6q container test-container-subpath-downwardapi-ss6q: STEP: delete the pod May 14 10:58:54.656: INFO: Waiting for pod pod-subpath-test-downwardapi-ss6q to disappear May 14 10:58:54.687: INFO: Pod pod-subpath-test-downwardapi-ss6q no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-ss6q May 14 10:58:54.687: INFO: Deleting pod "pod-subpath-test-downwardapi-ss6q" in namespace "e2e-tests-subpath-xz4ch" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:58:54.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-xz4ch" for this suite. May 14 10:59:00.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:59:00.774: INFO: namespace: e2e-tests-subpath-xz4ch, resource: bindings, ignored listing per whitelist May 14 10:59:00.846: INFO: namespace e2e-tests-subpath-xz4ch deletion completed in 6.153174761s • [SLOW TEST:32.633 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:59:00.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:59:01.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-bhqst" for this suite. 
May 14 10:59:07.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:59:07.157: INFO: namespace: e2e-tests-kubelet-test-bhqst, resource: bindings, ignored listing per whitelist May 14 10:59:07.161: INFO: namespace e2e-tests-kubelet-test-bhqst deletion completed in 6.095574302s • [SLOW TEST:6.314 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:59:07.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 14 10:59:07.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-sx4bt' May 14 10:59:07.546: INFO: stderr: "" May 14 10:59:07.546: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 14 10:59:07.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sx4bt' May 14 10:59:07.683: INFO: stderr: "" May 14 10:59:07.683: INFO: stdout: "update-demo-nautilus-pjvrl update-demo-nautilus-zbt9b " May 14 10:59:07.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pjvrl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sx4bt' May 14 10:59:07.795: INFO: stderr: "" May 14 10:59:07.795: INFO: stdout: "" May 14 10:59:07.795: INFO: update-demo-nautilus-pjvrl is created but not running May 14 10:59:12.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-sx4bt' May 14 10:59:12.915: INFO: stderr: "" May 14 10:59:12.915: INFO: stdout: "update-demo-nautilus-pjvrl update-demo-nautilus-zbt9b " May 14 10:59:12.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pjvrl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sx4bt' May 14 10:59:13.022: INFO: stderr: "" May 14 10:59:13.022: INFO: stdout: "true" May 14 10:59:13.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pjvrl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sx4bt' May 14 10:59:13.127: INFO: stderr: "" May 14 10:59:13.127: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 10:59:13.127: INFO: validating pod update-demo-nautilus-pjvrl May 14 10:59:13.146: INFO: got data: { "image": "nautilus.jpg" } May 14 10:59:13.146: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 10:59:13.146: INFO: update-demo-nautilus-pjvrl is verified up and running May 14 10:59:13.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zbt9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sx4bt' May 14 10:59:13.242: INFO: stderr: "" May 14 10:59:13.242: INFO: stdout: "true" May 14 10:59:13.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zbt9b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-sx4bt' May 14 10:59:13.343: INFO: stderr: "" May 14 10:59:13.343: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 10:59:13.343: INFO: validating pod update-demo-nautilus-zbt9b May 14 10:59:13.347: INFO: got data: { "image": "nautilus.jpg" } May 14 10:59:13.347: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 10:59:13.347: INFO: update-demo-nautilus-zbt9b is verified up and running STEP: using delete to clean up resources May 14 10:59:13.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-sx4bt' May 14 10:59:13.454: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 14 10:59:13.454: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 14 10:59:13.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-sx4bt' May 14 10:59:13.556: INFO: stderr: "No resources found.\n" May 14 10:59:13.556: INFO: stdout: "" May 14 10:59:13.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-sx4bt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 14 10:59:13.655: INFO: stderr: "" May 14 10:59:13.655: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:59:13.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-sx4bt" for this suite. May 14 10:59:35.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:59:35.720: INFO: namespace: e2e-tests-kubectl-sx4bt, resource: bindings, ignored listing per whitelist May 14 10:59:35.772: INFO: namespace e2e-tests-kubectl-sx4bt deletion completed in 22.113273558s • [SLOW TEST:28.611 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:59:35.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 STEP: creating the pod May 14 10:59:35.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nb2jr' May 14 10:59:36.159: INFO: stderr: "" May 14 10:59:36.159: INFO: stdout: "pod/pause created\n" May 14 10:59:36.159: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 14 10:59:36.159: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-nb2jr" to be "running and ready" May 14 10:59:36.293: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 134.618768ms May 14 10:59:38.296: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137572711s May 14 10:59:40.300: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.141228371s May 14 10:59:40.300: INFO: Pod "pause" satisfied condition "running and ready" May 14 10:59:40.300: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: adding the label testing-label with value testing-label-value to a pod May 14 10:59:40.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-nb2jr' May 14 10:59:40.414: INFO: stderr: "" May 14 10:59:40.414: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 14 10:59:40.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-nb2jr' May 14 10:59:40.526: INFO: stderr: "" May 14 10:59:40.526: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 14 10:59:40.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-nb2jr' May 14 10:59:40.634: INFO: stderr: "" May 14 10:59:40.634: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 14 10:59:40.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-nb2jr' May 14 10:59:40.742: INFO: stderr: "" May 14 10:59:40.742: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 STEP: using delete to clean up resources May 14 10:59:40.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nb2jr' May 14 10:59:40.873: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 10:59:40.873: INFO: stdout: "pod \"pause\" force deleted\n" May 14 10:59:40.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-nb2jr' May 14 10:59:40.992: INFO: stderr: "No resources found.\n" May 14 10:59:40.992: INFO: stdout: "" May 14 10:59:40.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-nb2jr -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 14 10:59:41.098: INFO: stderr: "" May 14 10:59:41.098: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:59:41.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nb2jr" for this suite. 
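The Kubectl label test above adds a label with `kubectl label pods pause testing-label=testing-label-value` and removes it again with `kubectl label pods pause testing-label-` (trailing dash). The pod it operates on could be created from a manifest along these lines (sketch only; the suite generates its own spec, and the image shown is an assumption):

apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    name: pause                  # selector label the test cleans up with (-l name=pause)
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1  # assumed image; the suite's exact pause image may differ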
May 14 10:59:47.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:59:47.166: INFO: namespace: e2e-tests-kubectl-nb2jr, resource: bindings, ignored listing per whitelist May 14 10:59:47.180: INFO: namespace e2e-tests-kubectl-nb2jr deletion completed in 6.078687708s • [SLOW TEST:11.408 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:59:47.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-067b6c31-95d2-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume configMaps May 14 10:59:47.285: INFO: Waiting up to 5m0s for pod "pod-configmaps-067d903c-95d2-11ea-9b22-0242ac110018" in namespace "e2e-tests-configmap-dcjsn" to be "success or failure" May 14 10:59:47.299: INFO: Pod "pod-configmaps-067d903c-95d2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.046742ms May 14 10:59:49.327: INFO: Pod "pod-configmaps-067d903c-95d2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041433954s May 14 10:59:51.346: INFO: Pod "pod-configmaps-067d903c-95d2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061081311s May 14 10:59:53.350: INFO: Pod "pod-configmaps-067d903c-95d2-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06463582s STEP: Saw pod success May 14 10:59:53.350: INFO: Pod "pod-configmaps-067d903c-95d2-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 10:59:53.353: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-067d903c-95d2-11ea-9b22-0242ac110018 container configmap-volume-test: STEP: delete the pod May 14 10:59:53.381: INFO: Waiting for pod pod-configmaps-067d903c-95d2-11ea-9b22-0242ac110018 to disappear May 14 10:59:53.386: INFO: Pod pod-configmaps-067d903c-95d2-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:59:53.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-dcjsn" for this suite. 
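The ConfigMap volume test above projects a ConfigMap key onto an explicit file path ("with mappings") and runs the consuming container with a non-root UID. A minimal pair of manifests of that shape might look like this (illustrative sketch; names, key, and image are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-volume-example       # hypothetical; the suite uses a generated name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # "as non-root": run with a non-root UID
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-volume-example
      items:                           # "with mappings": project a key onto an explicit path
      - key: data-1
        path: path/to/data-1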
May 14 10:59:59.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 10:59:59.443: INFO: namespace: e2e-tests-configmap-dcjsn, resource: bindings, ignored listing per whitelist May 14 10:59:59.480: INFO: namespace e2e-tests-configmap-dcjsn deletion completed in 6.090107857s • [SLOW TEST:12.299 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 10:59:59.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 10:59:59.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 14 10:59:59.720: INFO: stderr: "" May 14 10:59:59.720: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 10:59:59.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4x6tm" for this suite. 
May 14 11:00:05.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:00:05.770: INFO: namespace: e2e-tests-kubectl-4x6tm, resource: bindings, ignored listing per whitelist May 14 11:00:05.815: INFO: namespace e2e-tests-kubectl-4x6tm deletion completed in 6.091732335s • [SLOW TEST:6.335 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:00:05.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 11:00:05.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11970b1b-95d2-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-kb774" to be "success or failure" May 14 11:00:05.918: INFO: Pod "downwardapi-volume-11970b1b-95d2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128551ms May 14 11:00:07.921: INFO: Pod "downwardapi-volume-11970b1b-95d2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008056866s May 14 11:00:09.925: INFO: Pod "downwardapi-volume-11970b1b-95d2-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011990208s STEP: Saw pod success May 14 11:00:09.925: INFO: Pod "downwardapi-volume-11970b1b-95d2-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:00:09.928: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-11970b1b-95d2-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 11:00:09.984: INFO: Waiting for pod downwardapi-volume-11970b1b-95d2-11ea-9b22-0242ac110018 to disappear May 14 11:00:10.012: INFO: Pod downwardapi-volume-11970b1b-95d2-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:00:10.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kb774" for this suite. 
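The Projected downwardAPI test above exposes the container's CPU limit through a projected volume; because no limit is set on the container, the projected value falls back to the node's allocatable CPU. A minimal pod of that kind might look like this sketch (names and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    # no resources.limits.cpu is set, so the projected value defaults to node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu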
May 14 11:00:16.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:00:16.067: INFO: namespace: e2e-tests-projected-kb774, resource: bindings, ignored listing per whitelist May 14 11:00:16.130: INFO: namespace e2e-tests-projected-kb774 deletion completed in 6.087264223s • [SLOW TEST:10.315 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:00:16.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 14 11:00:16.301: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:00:24.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-qczmq" for this suite. 
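The InitContainer test above verifies that init containers run to completion, in order, before the main container of a RestartAlways pod starts. A minimal pod demonstrating that ordering might look like this sketch (names and image are assumptions, not the suite's actual spec):

apiVersion: v1
kind: Pod
metadata:
  name: init-containers-example        # hypothetical name
spec:
  restartPolicy: Always
  initContainers:                      # each must exit successfully before the next starts
  - name: init-1
    image: busybox
    command: ["true"]
  - name: init-2
    image: busybox
    command: ["true"]
  containers:
  - name: main                         # starts only after both init containers complete
    image: busybox
    command: ["sleep", "3600"]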
May 14 11:00:46.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:00:46.542: INFO: namespace: e2e-tests-init-container-qczmq, resource: bindings, ignored listing per whitelist May 14 11:00:46.580: INFO: namespace e2e-tests-init-container-qczmq deletion completed in 22.084570118s • [SLOW TEST:30.449 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:00:46.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 14 11:00:46.805: INFO: Waiting up to 5m0s for pod "pod-29f7bc1b-95d2-11ea-9b22-0242ac110018" in namespace "e2e-tests-emptydir-szttp" to be "success or failure" May 14 11:00:46.849: INFO: Pod "pod-29f7bc1b-95d2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 43.937379ms May 14 11:00:48.852: INFO: Pod "pod-29f7bc1b-95d2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046714223s May 14 11:00:50.856: INFO: Pod "pod-29f7bc1b-95d2-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050678349s STEP: Saw pod success May 14 11:00:50.856: INFO: Pod "pod-29f7bc1b-95d2-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:00:50.859: INFO: Trying to get logs from node hunter-worker2 pod pod-29f7bc1b-95d2-11ea-9b22-0242ac110018 container test-container: STEP: delete the pod May 14 11:00:51.092: INFO: Waiting for pod pod-29f7bc1b-95d2-11ea-9b22-0242ac110018 to disappear May 14 11:00:51.248: INFO: Pod pod-29f7bc1b-95d2-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:00:51.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-szttp" for this suite. 
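The EmptyDir test above writes into an emptyDir volume on the node's default medium; the "(root,0777,default)" in the test name refers to the user the container runs as, the expected directory permissions, and the volume medium. A minimal pod of that shape might look like this sketch (names, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-example       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # default medium (node disk); medium: Memory gives the tmpfs variants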
May 14 11:00:57.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:00:57.366: INFO: namespace: e2e-tests-emptydir-szttp, resource: bindings, ignored listing per whitelist May 14 11:00:57.375: INFO: namespace e2e-tests-emptydir-szttp deletion completed in 6.123128608s • [SLOW TEST:10.795 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:00:57.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-99qm8 May 14 11:01:01.523: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-99qm8 STEP: checking the pod's current state and verifying that restartCount is present May 14 11:01:01.526: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:05:02.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-99qm8" for this suite. 
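The probe test above runs a pod whose exec liveness probe (`cat /tmp/health`) keeps succeeding, and then watches for roughly four minutes to confirm restartCount stays at 0. A minimal pod of that kind might look like this sketch (names, image, and timings are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-example          # hypothetical name
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds for the pod's lifetime, so no restart occurs
      initialDelaySeconds: 5
      periodSeconds: 5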
May 14 11:05:08.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:05:08.370: INFO: namespace: e2e-tests-container-probe-99qm8, resource: bindings, ignored listing per whitelist May 14 11:05:08.374: INFO: namespace e2e-tests-container-probe-99qm8 deletion completed in 6.099992061s • [SLOW TEST:250.999 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:05:08.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0514 11:05:18.567232 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 14 11:05:18.567: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:05:18.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-w97wf" for this suite. 
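The garbage collector test above creates a replication controller, deletes it without orphaning its dependents, and waits for the controller-created pods to be garbage collected via their ownerReferences. A replication controller of that kind might look like this sketch (name, labels, and image are assumptions):

apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-example-rc                  # hypothetical name
spec:
  replicas: 2
  selector:
    app: gc-example
  template:
    metadata:
      labels:
        app: gc-example
    spec:
      containers:
      - name: nginx
        image: nginx
        # pods created from this template carry an ownerReference to the RC;
        # deleting the RC without orphaning lets the garbage collector remove them

Deleting the controller with a cascading (non-orphaning) delete is what triggers the cleanup observed in the log.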
May 14 11:05:24.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:05:24.603: INFO: namespace: e2e-tests-gc-w97wf, resource: bindings, ignored listing per whitelist May 14 11:05:24.639: INFO: namespace e2e-tests-gc-w97wf deletion completed in 6.068563883s • [SLOW TEST:16.264 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:05:24.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token May 14 11:05:25.255: INFO: Waiting up to 5m0s for pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-n6d24" in namespace "e2e-tests-svcaccounts-mwf7w" to be "success or failure" May 14 11:05:25.327: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-n6d24": Phase="Pending", Reason="", readiness=false. Elapsed: 71.71361ms May 14 11:05:27.332: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-n6d24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076545861s May 14 11:05:29.336: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-n6d24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080575945s May 14 11:05:31.341: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-n6d24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085533039s May 14 11:05:33.362: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-n6d24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106926784s STEP: Saw pod success May 14 11:05:33.362: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-n6d24" satisfied condition "success or failure" May 14 11:05:33.365: INFO: Trying to get logs from node hunter-worker pod pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-n6d24 container token-test: STEP: delete the pod May 14 11:05:33.388: INFO: Waiting for pod pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-n6d24 to disappear May 14 11:05:33.404: INFO: Pod pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-n6d24 no longer exists STEP: Creating a pod to test consume service account root CA May 14 11:05:33.408: INFO: Waiting up to 5m0s for pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-5p9fz" in namespace "e2e-tests-svcaccounts-mwf7w" to be "success or failure" May 14 11:05:33.512: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-5p9fz": Phase="Pending", Reason="", readiness=false. 
Elapsed: 104.268695ms May 14 11:05:35.515: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-5p9fz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107005088s May 14 11:05:37.518: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-5p9fz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11000147s May 14 11:05:39.522: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-5p9fz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113957492s May 14 11:05:41.527: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-5p9fz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.118613162s STEP: Saw pod success May 14 11:05:41.527: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-5p9fz" satisfied condition "success or failure" May 14 11:05:41.530: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-5p9fz container root-ca-test: STEP: delete the pod May 14 11:05:41.563: INFO: Waiting for pod pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-5p9fz to disappear May 14 11:05:41.597: INFO: Pod pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-5p9fz no longer exists STEP: Creating a pod to test consume service account namespace May 14 11:05:41.601: INFO: Waiting up to 5m0s for pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-9kd54" in namespace "e2e-tests-svcaccounts-mwf7w" to be "success or failure" May 14 11:05:41.616: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-9kd54": Phase="Pending", Reason="", readiness=false. Elapsed: 14.183718ms May 14 11:05:43.620: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-9kd54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018592916s May 14 11:05:45.624: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-9kd54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022242039s May 14 11:05:47.967: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-9kd54": Phase="Pending", Reason="", readiness=false. Elapsed: 6.36544405s May 14 11:05:49.970: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-9kd54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.368949147s STEP: Saw pod success May 14 11:05:49.970: INFO: Pod "pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-9kd54" satisfied condition "success or failure" May 14 11:05:49.973: INFO: Trying to get logs from node hunter-worker pod pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-9kd54 container namespace-test: STEP: delete the pod May 14 11:05:50.009: INFO: Waiting for pod pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-9kd54 to disappear May 14 11:05:50.022: INFO: Pod pod-service-account-cfefece6-95d2-11ea-9b22-0242ac110018-9kd54 no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:05:50.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-mwf7w" for this suite. 
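The ServiceAccounts test above runs three containers (token-test, root-ca-test, namespace-test) that each read one of the files mounted from the service-account secret at the well-known path /var/run/secrets/kubernetes.io/serviceaccount. A minimal pod exercising the same mount might look like this sketch (pod name and image are assumptions; the file paths are the standard mount locations):

apiVersion: v1
kind: Pod
metadata:
  name: svcaccount-mount-example       # hypothetical name
spec:
  serviceAccountName: default
  restartPolicy: Never
  containers:
  - name: token-test
    image: busybox
    command: ["cat", "/var/run/secrets/kubernetes.io/serviceaccount/token"]
  - name: root-ca-test
    image: busybox
    command: ["cat", "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"]
  - name: namespace-test
    image: busybox
    command: ["cat", "/var/run/secrets/kubernetes.io/serviceaccount/namespace"]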
May 14 11:05:56.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:05:56.104: INFO: namespace: e2e-tests-svcaccounts-mwf7w, resource: bindings, ignored listing per whitelist May 14 11:05:56.131: INFO: namespace e2e-tests-svcaccounts-mwf7w deletion completed in 6.105825475s • [SLOW TEST:31.492 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:05:56.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed May 14 11:06:00.306: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-e2630d1d-95d2-11ea-9b22-0242ac110018", GenerateName:"", Namespace:"e2e-tests-pods-mxnbw", SelfLink:"/api/v1/namespaces/e2e-tests-pods-mxnbw/pods/pod-submit-remove-e2630d1d-95d2-11ea-9b22-0242ac110018", UID:"e26e73c1-95d2-11ea-99e8-0242ac110002", ResourceVersion:"10517082", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725051156, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"203453029"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-zdpnd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001eb6d00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zdpnd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00172ad78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001dc20c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00172adc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00172ade0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00172ade8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00172adec)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051156, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051160, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051160, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051156, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", 
PodIP:"10.244.2.91", StartTime:(*v1.Time)(0xc000fb7e40), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000fb7e60), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://ed9056c5f1ce993bcaa4a1d1e128867fd2da165d67840592eaf5819a91b39a80"}}, QOSClass:"BestEffort"}} STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 14 11:06:05.318: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:06:05.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-mxnbw" for this suite. May 14 11:06:11.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:06:11.376: INFO: namespace: e2e-tests-pods-mxnbw, resource: bindings, ignored listing per whitelist May 14 11:06:11.421: INFO: namespace e2e-tests-pods-mxnbw deletion completed in 6.09676015s • [SLOW TEST:15.289 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:06:11.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5t6dl A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-5t6dl;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5t6dl A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-5t6dl;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5t6dl.svc A)" && test 
-n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-5t6dl.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5t6dl.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-5t6dl.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5t6dl.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-5t6dl.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5t6dl.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-5t6dl.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-5t6dl.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 43.3.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.3.43_udp@PTR;check="$$(dig +tcp +noall +answer +search 43.3.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.3.43_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5t6dl A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-5t6dl;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5t6dl A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-5t6dl.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-5t6dl.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-5t6dl.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5t6dl.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-5t6dl.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-5t6dl.svc SRV)" && test -n "$$check" && echo OK > 
/results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-5t6dl.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-5t6dl.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 43.3.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.3.43_udp@PTR;check="$$(dig +tcp +noall +answer +search 43.3.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.3.43_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 11:06:21.644: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:21.667: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:21.670: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:21.672: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5t6dl from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:21.675: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:21.677: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:21.680: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:21.683: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:21.685: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:21.700: INFO: Lookups using e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc jessie_udp@dns-test-service 
jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5t6dl jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl jessie_udp@dns-test-service.e2e-tests-dns-5t6dl.svc jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc] May 14 11:06:26.726: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:26.749: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:26.752: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:26.755: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5t6dl from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:26.757: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:26.759: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:26.762: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:26.764: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:26.766: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:26.781: INFO: Lookups using e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5t6dl jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl jessie_udp@dns-test-service.e2e-tests-dns-5t6dl.svc jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc] May 14 11:06:31.719: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod 
e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:31.734: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:31.735: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:31.737: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5t6dl from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:31.739: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:31.740: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:31.742: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:31.744: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:31.747: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:31.763: INFO: Lookups using e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5t6dl jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl jessie_udp@dns-test-service.e2e-tests-dns-5t6dl.svc jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc] May 14 11:06:36.728: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:36.753: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:36.756: INFO: Unable to read jessie_tcp@dns-test-service from pod 
e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:36.760: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5t6dl from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:36.763: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:36.766: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:36.769: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:36.772: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:36.776: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:36.796: INFO: Lookups using e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5t6dl jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl jessie_udp@dns-test-service.e2e-tests-dns-5t6dl.svc jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc] May 14 11:06:41.717: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:41.734: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:41.736: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:41.739: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5t6dl from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:41.742: INFO: Unable to read 
jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:41.744: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:41.747: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:41.750: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:41.752: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:41.772: INFO: Lookups using e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5t6dl jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl jessie_udp@dns-test-service.e2e-tests-dns-5t6dl.svc jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc] May 14 11:06:46.728: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:46.750: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:46.753: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:46.756: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5t6dl from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:46.758: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:46.761: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:46.763: 
INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:46.766: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:46.769: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc from pod e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018: the server could not find the requested resource (get pods dns-test-eb877a68-95d2-11ea-9b22-0242ac110018) May 14 11:06:46.786: INFO: Lookups using e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018 failed for: [wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-5t6dl jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl jessie_udp@dns-test-service.e2e-tests-dns-5t6dl.svc jessie_tcp@dns-test-service.e2e-tests-dns-5t6dl.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-5t6dl.svc] May 14 11:06:51.787: INFO: DNS probes using e2e-tests-dns-5t6dl/dns-test-eb877a68-95d2-11ea-9b22-0242ac110018 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:06:52.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-5t6dl" for this suite. 
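
The DNS probes above run dig loops inside "wheezy" and "jessie" containers, resolving the test service by A and SRV records over UDP and TCP and doing a reverse (PTR) lookup of its ClusterIP, writing a marker file per successful lookup. A minimal Go sketch of the same three lookups, assuming it runs inside a pod on this cluster so the default resolver is cluster DNS (the service, namespace and IP are the ones logged above, used purely for illustration):

    // dnsprobe.go - minimal sketch of the lookups the DNS conformance probe performs.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        svc := "dns-test-service.e2e-tests-dns-5t6dl.svc.cluster.local"

        // A-record lookup for the service (the wheezy/jessie probes do this with dig).
        if addrs, err := net.LookupHost(svc); err == nil {
            fmt.Println("A:", addrs)
        } else {
            fmt.Println("A lookup failed:", err)
        }

        // SRV lookup for the named port, i.e. _http._tcp.<service>.<namespace>.svc.
        if _, srvs, err := net.LookupSRV("http", "tcp", svc); err == nil {
            for _, s := range srvs {
                fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
            }
        } else {
            fmt.Println("SRV lookup failed:", err)
        }

        // Reverse (PTR) lookup of the service ClusterIP, as in the 10.110.3.43_*@PTR probes.
        if names, err := net.LookupAddr("10.110.3.43"); err == nil {
            fmt.Println("PTR:", names)
        } else {
            fmt.Println("PTR lookup failed:", err)
        }
    }

A lookup that fails simply leaves its marker file unwritten, which is why the framework logs repeated "Unable to read ... / Lookups ... failed for" rounds before the final "DNS probes ... succeeded" line.
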
May 14 11:06:58.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:06:58.559: INFO: namespace: e2e-tests-dns-5t6dl, resource: bindings, ignored listing per whitelist May 14 11:06:58.590: INFO: namespace e2e-tests-dns-5t6dl deletion completed in 6.118384234s • [SLOW TEST:47.169 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:06:58.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 14 11:06:58.739: INFO: Waiting up to 5m0s for pod "pod-07a86b17-95d3-11ea-9b22-0242ac110018" in namespace "e2e-tests-emptydir-l8c2f" to be "success or failure" May 14 11:06:58.742: INFO: Pod "pod-07a86b17-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.036122ms May 14 11:07:00.745: INFO: Pod "pod-07a86b17-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006486276s May 14 11:07:02.749: INFO: Pod "pod-07a86b17-95d3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010406567s STEP: Saw pod success May 14 11:07:02.749: INFO: Pod "pod-07a86b17-95d3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:07:02.752: INFO: Trying to get logs from node hunter-worker2 pod pod-07a86b17-95d3-11ea-9b22-0242ac110018 container test-container: STEP: delete the pod May 14 11:07:02.778: INFO: Waiting for pod pod-07a86b17-95d3-11ea-9b22-0242ac110018 to disappear May 14 11:07:02.807: INFO: Pod pod-07a86b17-95d3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:07:02.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-l8c2f" for this suite. 
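
The emptyDir test above creates a one-shot pod that mounts an emptyDir volume on the node's default medium and has its container create a file with mode 0644 as a non-root user. A rough sketch of that kind of pod spec, assuming k8s.io/api and k8s.io/apimachinery are on the module path; the UID, image, command and mount path are illustrative, not the framework's exact values:

    // emptydir_pod.go - rough sketch of the kind of pod the emptyDir conformance tests create.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        uid := int64(1001) // arbitrary non-root UID for illustration
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-default"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    // Medium "" (default) backs the volume with node storage; "Memory" would use tmpfs.
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:            "test-container",
                    Image:           "docker.io/library/busybox:1.29",
                    Command:         []string{"sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
                    SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
                    VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
            },
        }
        fmt.Printf("%+v\n", pod.Spec.Volumes[0])
    }

The framework then waits for the pod to reach "Succeeded" and reads the container log to confirm the expected mode, which is the "success or failure" condition seen above.
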
May 14 11:07:08.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:07:08.859: INFO: namespace: e2e-tests-emptydir-l8c2f, resource: bindings, ignored listing per whitelist May 14 11:07:08.897: INFO: namespace e2e-tests-emptydir-l8c2f deletion completed in 6.087381707s • [SLOW TEST:10.307 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:07:08.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 11:07:09.139: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0dd9dc52-95d3-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-z9jdn" to be "success or failure" May 14 11:07:09.161: INFO: Pod "downwardapi-volume-0dd9dc52-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.989291ms May 14 11:07:11.164: INFO: Pod "downwardapi-volume-0dd9dc52-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025391811s May 14 11:07:13.168: INFO: Pod "downwardapi-volume-0dd9dc52-95d3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028981638s STEP: Saw pod success May 14 11:07:13.168: INFO: Pod "downwardapi-volume-0dd9dc52-95d3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:07:13.170: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-0dd9dc52-95d3-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 11:07:13.206: INFO: Waiting for pod downwardapi-volume-0dd9dc52-95d3-11ea-9b22-0242ac110018 to disappear May 14 11:07:13.220: INFO: Pod downwardapi-volume-0dd9dc52-95d3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:07:13.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z9jdn" for this suite. 
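
The projected downwardAPI test above verifies that a per-item mode can be set on a single projected file. A sketch of the volume definition it exercises, assuming k8s.io/api is available; the 0400 mode and file path are illustrative values:

    // projected_downward_mode.go - projected downwardAPI volume with an explicit item mode.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // per-item mode overrides the volume's defaultMode for this file only
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                                Mode:     &mode,
                            }},
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
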
May 14 11:07:19.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:07:19.311: INFO: namespace: e2e-tests-projected-z9jdn, resource: bindings, ignored listing per whitelist May 14 11:07:19.356: INFO: namespace e2e-tests-projected-z9jdn deletion completed in 6.132428662s • [SLOW TEST:10.458 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:07:19.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-1403224b-95d3-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 11:07:19.497: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1405cca8-95d3-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-kvxhm" to be "success or failure" May 14 11:07:19.502: INFO: Pod "pod-projected-secrets-1405cca8-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.559606ms May 14 11:07:21.505: INFO: Pod "pod-projected-secrets-1405cca8-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008301748s May 14 11:07:23.541: INFO: Pod "pod-projected-secrets-1405cca8-95d3-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.043862681s May 14 11:07:25.545: INFO: Pod "pod-projected-secrets-1405cca8-95d3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048273275s STEP: Saw pod success May 14 11:07:25.545: INFO: Pod "pod-projected-secrets-1405cca8-95d3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:07:25.549: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-1405cca8-95d3-11ea-9b22-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 14 11:07:25.612: INFO: Waiting for pod pod-projected-secrets-1405cca8-95d3-11ea-9b22-0242ac110018 to disappear May 14 11:07:25.616: INFO: Pod pod-projected-secrets-1405cca8-95d3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:07:25.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-kvxhm" for this suite. 
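
The projected secret test above combines a non-root pod, a volume-level defaultMode and an fsGroup. A sketch of that combination, assuming k8s.io/api is available; the UID, GID, mode and secret name are illustrative, not the framework's values:

    // projected_secret_fsgroup.go - non-root pod reading a projected secret via fsGroup.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        uid, gid := int64(1000), int64(2000)
        mode := int32(0440)
        spec := corev1.PodSpec{
            // fsGroup has the kubelet set group ownership of the projected files to GID 2000,
            // so with mode 0440 the non-root user can still read them.
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &gid},
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        DefaultMode: &mode,
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
                            },
                        }},
                    },
                },
            }},
        }
        fmt.Printf("%+v\n", spec.SecurityContext)
    }
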
May 14 11:07:31.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:07:31.671: INFO: namespace: e2e-tests-projected-kvxhm, resource: bindings, ignored listing per whitelist May 14 11:07:31.712: INFO: namespace e2e-tests-projected-kvxhm deletion completed in 6.092811928s • [SLOW TEST:12.356 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:07:31.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 14 11:07:31.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-5x9m8' May 14 11:07:31.911: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 14 11:07:31.911: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 May 14 11:07:33.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-5x9m8' May 14 11:07:34.514: INFO: stderr: "" May 14 11:07:34.515: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:07:34.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5x9m8" for this suite. 
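
The kubectl test above shells out to the kubectl binary, exactly as the "Running '/usr/local/bin/kubectl ... run ...'" line shows; on this v1.13 cluster the default generator creates a Deployment, which is why stderr carries the --generator deprecation warning and stdout reports "deployment.apps/... created". A small sketch of the same invocation from Go using only the standard library (the kubeconfig path, namespace and names are the ones logged above, reused for illustration):

    // kubectl_run.go - shelling out to kubectl the way the e2e framework does.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("kubectl",
            "--kubeconfig", "/root/.kube/config",
            "run", "e2e-test-nginx-deployment",
            "--image=docker.io/library/nginx:1.14-alpine",
            "--namespace=e2e-tests-kubectl-5x9m8",
        )
        out, err := cmd.CombinedOutput() // stdout and stderr together, like the pair captured in the log
        fmt.Printf("err=%v\n%s", err, out)
    }
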
May 14 11:07:56.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:07:56.814: INFO: namespace: e2e-tests-kubectl-5x9m8, resource: bindings, ignored listing per whitelist May 14 11:07:56.841: INFO: namespace e2e-tests-kubectl-5x9m8 deletion completed in 22.323399208s • [SLOW TEST:25.128 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:07:56.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 14 11:07:56.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 14 11:07:59.512: INFO: stderr: "" May 14 11:07:59.512: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:07:59.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xd8ht" for this suite. 
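
The cluster-info stdout captured above contains ANSI colour escapes (\x1b[0;32m ... \x1b[0m) because kubectl colours its output. A small sketch of stripping those escapes and checking for the "Kubernetes master" line the test asserts on; the sample string is an abbreviated copy of the logged output:

    // cluster_info_check.go - strip ANSI colour codes from kubectl cluster-info output.
    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

    func main() {
        raw := "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n"
        plain := ansi.ReplaceAllString(raw, "")
        fmt.Print(plain)
        fmt.Println("contains master line:", strings.Contains(plain, "Kubernetes master"))
    }
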
May 14 11:08:05.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:08:05.581: INFO: namespace: e2e-tests-kubectl-xd8ht, resource: bindings, ignored listing per whitelist May 14 11:08:05.646: INFO: namespace e2e-tests-kubectl-xd8ht deletion completed in 6.130184689s • [SLOW TEST:8.805 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:08:05.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 14 11:08:05.765: INFO: Pod name pod-release: Found 0 pods out of 1 May 14 11:08:10.767: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:08:11.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-pbgz5" for this suite. 
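
The ReplicationController test above relies on selector mechanics: once a pod's labels stop matching the RC's selector, the controller releases (orphans) the pod instead of managing it. A sketch of that matching step, assuming k8s.io/apimachinery is available; the replacement label value is illustrative:

    // rc_release.go - why changing a pod label releases it from its ReplicationController.
    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/labels"
    )

    func main() {
        rcSelector := labels.SelectorFromSet(labels.Set{"name": "pod-release"})

        podLabels := labels.Set{"name": "pod-release"}
        fmt.Println("owned:", rcSelector.Matches(podLabels)) // true - RC keeps managing the pod

        podLabels["name"] = "not-matching" // the test patches the pod's label roughly like this
        fmt.Println("owned:", rcSelector.Matches(podLabels)) // false - RC releases the pod
    }
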
May 14 11:08:17.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:08:17.980: INFO: namespace: e2e-tests-replication-controller-pbgz5, resource: bindings, ignored listing per whitelist May 14 11:08:18.023: INFO: namespace e2e-tests-replication-controller-pbgz5 deletion completed in 6.215830237s • [SLOW TEST:12.376 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:08:18.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 11:08:18.228: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3707837e-95d3-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-c794x" to be "success or failure" May 14 11:08:18.238: INFO: Pod "downwardapi-volume-3707837e-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.495373ms May 14 11:08:20.243: INFO: Pod "downwardapi-volume-3707837e-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014872922s May 14 11:08:22.247: INFO: Pod "downwardapi-volume-3707837e-95d3-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.018787103s May 14 11:08:24.250: INFO: Pod "downwardapi-volume-3707837e-95d3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022495476s STEP: Saw pod success May 14 11:08:24.250: INFO: Pod "downwardapi-volume-3707837e-95d3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:08:24.253: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-3707837e-95d3-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 11:08:24.275: INFO: Waiting for pod downwardapi-volume-3707837e-95d3-11ea-9b22-0242ac110018 to disappear May 14 11:08:24.311: INFO: Pod downwardapi-volume-3707837e-95d3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:08:24.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-c794x" for this suite. 
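
In the "podname only" test above, the projected downwardAPI volume writes the pod's own name (fieldRef metadata.name) into a file, and the client container prints it back so the framework can compare it with the expected name. What the container side amounts to is a simple file read; the mount path below is an assumption for illustration, not the framework's exact path:

    // read_podname.go - what the test's client container effectively does.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        b, err := os.ReadFile("/etc/podinfo/podname")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Printf("podname=%s\n", b)
    }
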
May 14 11:08:30.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:08:30.485: INFO: namespace: e2e-tests-projected-c794x, resource: bindings, ignored listing per whitelist May 14 11:08:30.490: INFO: namespace e2e-tests-projected-c794x deletion completed in 6.117244134s • [SLOW TEST:12.467 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:08:30.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 14 11:08:30.667: INFO: PodSpec: initContainers in spec.initContainers May 14 11:09:23.829: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3e747262-95d3-11ea-9b22-0242ac110018", GenerateName:"", Namespace:"e2e-tests-init-container-p6l5b", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-p6l5b/pods/pod-init-3e747262-95d3-11ea-9b22-0242ac110018", UID:"3e74f709-95d3-11ea-99e8-0242ac110002", ResourceVersion:"10517758", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725051310, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"667848670"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jqkxw", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001cb1d40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jqkxw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jqkxw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jqkxw", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002100a38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001b13d40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002100da0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002100e10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002100e18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002100e1c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051310, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051310, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051310, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725051310, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.238", StartTime:(*v1.Time)(0xc000b0aba0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000b0abe0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0012db9d0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://3df09078e0a00d33661121e545a283d1034838ccc5fa2d70e0d353434329b97a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000b0ac00), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000b0abc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:09:23.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-p6l5b" for this suite. May 14 11:09:46.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:09:46.223: INFO: namespace: e2e-tests-init-container-p6l5b, resource: bindings, ignored listing per whitelist May 14 11:09:46.276: INFO: namespace e2e-tests-init-container-p6l5b deletion completed in 22.406173245s • [SLOW TEST:75.786 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:09:46.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 14 11:09:54.439: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 11:09:54.449: INFO: Pod pod-with-prestop-http-hook still exists May 14 11:09:56.449: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 11:09:56.453: INFO: Pod pod-with-prestop-http-hook still exists May 14 11:09:58.449: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 11:09:58.452: INFO: Pod pod-with-prestop-http-hook still exists May 14 11:10:00.449: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 11:10:00.452: INFO: Pod pod-with-prestop-http-hook still exists May 14 11:10:02.449: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 14 11:10:02.461: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:10:02.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-tct57" for this suite. May 14 11:10:24.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:10:24.501: INFO: namespace: e2e-tests-container-lifecycle-hook-tct57, resource: bindings, ignored listing per whitelist May 14 11:10:24.568: INFO: namespace e2e-tests-container-lifecycle-hook-tct57 deletion completed in 22.096468992s • [SLOW TEST:38.292 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:10:24.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 11:10:24.696: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8264851a-95d3-11ea-9b22-0242ac110018" in namespace "e2e-tests-downward-api-2pw55" to be "success or failure" May 14 11:10:24.718: INFO: Pod "downwardapi-volume-8264851a-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.346961ms May 14 11:10:26.723: INFO: Pod "downwardapi-volume-8264851a-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026473498s May 14 11:10:28.727: INFO: Pod "downwardapi-volume-8264851a-95d3-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.03094499s May 14 11:10:30.732: INFO: Pod "downwardapi-volume-8264851a-95d3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035487871s STEP: Saw pod success May 14 11:10:30.732: INFO: Pod "downwardapi-volume-8264851a-95d3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:10:30.735: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8264851a-95d3-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 11:10:30.762: INFO: Waiting for pod downwardapi-volume-8264851a-95d3-11ea-9b22-0242ac110018 to disappear May 14 11:10:30.766: INFO: Pod downwardapi-volume-8264851a-95d3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:10:30.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2pw55" for this suite. May 14 11:10:36.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:10:37.016: INFO: namespace: e2e-tests-downward-api-2pw55, resource: bindings, ignored listing per whitelist May 14 11:10:37.049: INFO: namespace e2e-tests-downward-api-2pw55 deletion completed in 6.280705147s • [SLOW TEST:12.481 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:10:37.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 14 11:10:37.349: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 11:10:37.435: INFO: Waiting for terminating namespaces to be deleted... 
May 14 11:10:37.437: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 14 11:10:37.442: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 14 11:10:37.442: INFO: Container kindnet-cni ready: true, restart count 0 May 14 11:10:37.442: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 14 11:10:37.442: INFO: Container coredns ready: true, restart count 0 May 14 11:10:37.442: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 14 11:10:37.442: INFO: Container kube-proxy ready: true, restart count 0 May 14 11:10:37.442: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 14 11:10:37.446: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 14 11:10:37.446: INFO: Container kindnet-cni ready: true, restart count 0 May 14 11:10:37.446: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 14 11:10:37.446: INFO: Container coredns ready: true, restart count 0 May 14 11:10:37.446: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 14 11:10:37.446: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160ee0924c1b74e9], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:10:38.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-nk6kd" for this suite. 
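
The FailedScheduling event above ("3 node(s) didn't match node selector") comes from the node-selector predicate: a pod's nodeSelector must be a subset of a node's labels on at least one node. A sketch of that check, assuming k8s.io/apimachinery is available; the selector and node labels are illustrative (the test deliberately uses a selector no node carries):

    // nodeselector_check.go - the predicate behind "node(s) didn't match node selector".
    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/labels"
    )

    func main() {
        podNodeSelector := labels.Set{"e2e-test": "no-such-label-42"}
        nodeLabels := labels.Set{"kubernetes.io/hostname": "hunter-worker"}

        matches := labels.SelectorFromSet(podNodeSelector).Matches(nodeLabels)
        fmt.Println("schedulable on this node:", matches) // false for every node -> pod stays Pending
    }
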
May 14 11:10:44.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:10:44.575: INFO: namespace: e2e-tests-sched-pred-nk6kd, resource: bindings, ignored listing per whitelist May 14 11:10:44.578: INFO: namespace e2e-tests-sched-pred-nk6kd deletion completed in 6.116253472s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.529 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:10:44.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-8e535c94-95d3-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 11:10:44.752: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8e574cdf-95d3-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-v4vnv" to be "success or failure" May 14 11:10:44.765: INFO: Pod "pod-projected-secrets-8e574cdf-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.542077ms May 14 11:10:46.769: INFO: Pod "pod-projected-secrets-8e574cdf-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017281964s May 14 11:10:48.917: INFO: Pod "pod-projected-secrets-8e574cdf-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165127706s May 14 11:10:50.920: INFO: Pod "pod-projected-secrets-8e574cdf-95d3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.168458967s STEP: Saw pod success May 14 11:10:50.920: INFO: Pod "pod-projected-secrets-8e574cdf-95d3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:10:50.923: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-8e574cdf-95d3-11ea-9b22-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 14 11:10:51.263: INFO: Waiting for pod pod-projected-secrets-8e574cdf-95d3-11ea-9b22-0242ac110018 to disappear May 14 11:10:51.265: INFO: Pod pod-projected-secrets-8e574cdf-95d3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:10:51.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-v4vnv" for this suite. 
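
The "with mappings" variant above differs from the plain projected-secret test in that individual secret keys are remapped to new file paths via items (KeyToPath) rather than appearing under their key names. A sketch of such a volume, assuming k8s.io/api is available; the key, path and secret name are illustrative:

    // projected_secret_mapping.go - projected secret volume with key-to-path mappings.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",
                                Path: "new-path-data-1", // file appears at <mountPath>/new-path-data-1
                            }},
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
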
May 14 11:10:57.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:10:57.387: INFO: namespace: e2e-tests-projected-v4vnv, resource: bindings, ignored listing per whitelist May 14 11:10:57.411: INFO: namespace e2e-tests-projected-v4vnv deletion completed in 6.143834505s • [SLOW TEST:12.832 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:10:57.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 11:10:57.540: INFO: Waiting up to 5m0s for pod "downwardapi-volume-95fd595c-95d3-11ea-9b22-0242ac110018" in namespace "e2e-tests-downward-api-j9947" to be "success or failure" May 14 11:10:57.558: INFO: Pod "downwardapi-volume-95fd595c-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.331528ms May 14 11:10:59.792: INFO: Pod "downwardapi-volume-95fd595c-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.251170605s May 14 11:11:01.806: INFO: Pod "downwardapi-volume-95fd595c-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.26596229s May 14 11:11:03.809: INFO: Pod "downwardapi-volume-95fd595c-95d3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.268398737s STEP: Saw pod success May 14 11:11:03.809: INFO: Pod "downwardapi-volume-95fd595c-95d3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:11:03.811: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-95fd595c-95d3-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 11:11:03.833: INFO: Waiting for pod downwardapi-volume-95fd595c-95d3-11ea-9b22-0242ac110018 to disappear May 14 11:11:03.838: INFO: Pod downwardapi-volume-95fd595c-95d3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:11:03.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-j9947" for this suite. 
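Note: the DefaultMode check below concerns a downwardAPI volume whose files are created with a fixed mode. A minimal sketch of that volume shape (the 0400 mode, file path and image are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	defaultMode := int32(0400) // mode applied to every file in the volume; value is an assumption
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &defaultMode,
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // assumption
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
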
May 14 11:11:09.872: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:11:09.918: INFO: namespace: e2e-tests-downward-api-j9947, resource: bindings, ignored listing per whitelist May 14 11:11:09.946: INFO: namespace e2e-tests-downward-api-j9947 deletion completed in 6.104309753s • [SLOW TEST:12.535 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:11:09.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 14 11:11:10.026: INFO: Waiting up to 5m0s for pod "var-expansion-9d6fa562-95d3-11ea-9b22-0242ac110018" in namespace "e2e-tests-var-expansion-mqq9r" to be "success or failure" May 14 11:11:10.078: INFO: Pod "var-expansion-9d6fa562-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 52.176015ms May 14 11:11:12.082: INFO: Pod "var-expansion-9d6fa562-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056219471s May 14 11:11:14.087: INFO: Pod "var-expansion-9d6fa562-95d3-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.060708311s May 14 11:11:16.091: INFO: Pod "var-expansion-9d6fa562-95d3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064808792s STEP: Saw pod success May 14 11:11:16.091: INFO: Pod "var-expansion-9d6fa562-95d3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:11:16.094: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-9d6fa562-95d3-11ea-9b22-0242ac110018 container dapi-container: STEP: delete the pod May 14 11:11:16.279: INFO: Waiting for pod var-expansion-9d6fa562-95d3-11ea-9b22-0242ac110018 to disappear May 14 11:11:16.359: INFO: Pod var-expansion-9d6fa562-95d3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:11:16.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-mqq9r" for this suite. 
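Note: the variable-expansion test below relies on $(VAR) references in a container's args being expanded from the container's env before the command runs. A minimal sketch of that spec (env var name, value and image are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c"},
				// $(MESSAGE) is substituted from the container's env by the kubelet
				// before the command starts, which is what the test verifies.
				Args: []string{"echo $(MESSAGE)"},
				Env: []corev1.EnvVar{{
					Name:  "MESSAGE",
					Value: "test-value", // assumption
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
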
May 14 11:11:22.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:11:22.523: INFO: namespace: e2e-tests-var-expansion-mqq9r, resource: bindings, ignored listing per whitelist May 14 11:11:22.548: INFO: namespace e2e-tests-var-expansion-mqq9r deletion completed in 6.186662078s • [SLOW TEST:12.603 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:11:22.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-a4f64f6b-95d3-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume configMaps May 14 11:11:22.678: INFO: Waiting up to 5m0s for pod "pod-configmaps-a4f81f67-95d3-11ea-9b22-0242ac110018" in namespace "e2e-tests-configmap-cmdbc" to be "success or failure" May 14 11:11:22.701: INFO: Pod "pod-configmaps-a4f81f67-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 22.665428ms May 14 11:11:24.712: INFO: Pod "pod-configmaps-a4f81f67-95d3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034049154s May 14 11:11:26.767: INFO: Pod "pod-configmaps-a4f81f67-95d3-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.088660578s May 14 11:11:28.770: INFO: Pod "pod-configmaps-a4f81f67-95d3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.091888359s STEP: Saw pod success May 14 11:11:28.770: INFO: Pod "pod-configmaps-a4f81f67-95d3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:11:28.773: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-a4f81f67-95d3-11ea-9b22-0242ac110018 container configmap-volume-test: STEP: delete the pod May 14 11:11:28.804: INFO: Waiting for pod pod-configmaps-a4f81f67-95d3-11ea-9b22-0242ac110018 to disappear May 14 11:11:28.821: INFO: Pod pod-configmaps-a4f81f67-95d3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:11:28.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-cmdbc" for this suite. 
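Note: "consumable from pods in volume as non-root" below combines a configMap volume with a non-root pod security context. A minimal sketch of that combination (UID, key name and image are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1000) // any non-root UID; the value is an assumption
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox", // assumption
				Command:      []string{"sh", "-c", "cat /etc/configmap-volume/data-1"}, // key name is an assumption
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
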
May 14 11:11:36.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:11:36.995: INFO: namespace: e2e-tests-configmap-cmdbc, resource: bindings, ignored listing per whitelist May 14 11:11:37.078: INFO: namespace e2e-tests-configmap-cmdbc deletion completed in 8.254229528s • [SLOW TEST:14.529 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:11:37.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-h9hbn STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 11:11:37.452: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 14 11:12:03.706: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.243:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-h9hbn PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:12:03.706: INFO: >>> kubeConfig: /root/.kube/config I0514 11:12:03.733267 6 log.go:172] (0xc0005e7810) (0xc00223c3c0) Create stream I0514 11:12:03.733298 6 log.go:172] (0xc0005e7810) (0xc00223c3c0) Stream added, broadcasting: 1 I0514 11:12:03.734762 6 log.go:172] (0xc0005e7810) Reply frame received for 1 I0514 11:12:03.734791 6 log.go:172] (0xc0005e7810) (0xc0021a5ea0) Create stream I0514 11:12:03.734801 6 log.go:172] (0xc0005e7810) (0xc0021a5ea0) Stream added, broadcasting: 3 I0514 11:12:03.735560 6 log.go:172] (0xc0005e7810) Reply frame received for 3 I0514 11:12:03.735598 6 log.go:172] (0xc0005e7810) (0xc0021d7720) Create stream I0514 11:12:03.735613 6 log.go:172] (0xc0005e7810) (0xc0021d7720) Stream added, broadcasting: 5 I0514 11:12:03.736346 6 log.go:172] (0xc0005e7810) Reply frame received for 5 I0514 11:12:03.837927 6 log.go:172] (0xc0005e7810) Data frame received for 3 I0514 11:12:03.837972 6 log.go:172] (0xc0021a5ea0) (3) Data frame handling I0514 11:12:03.838002 6 log.go:172] (0xc0021a5ea0) (3) Data frame sent I0514 11:12:03.838919 6 log.go:172] (0xc0005e7810) Data frame received for 5 I0514 11:12:03.838942 6 log.go:172] (0xc0021d7720) (5) Data frame handling I0514 11:12:03.839012 6 log.go:172] (0xc0005e7810) Data frame received for 3 I0514 11:12:03.839025 6 log.go:172] (0xc0021a5ea0) (3) Data frame handling I0514 11:12:03.840829 6 log.go:172] (0xc0005e7810) Data frame received for 1 
I0514 11:12:03.840857 6 log.go:172] (0xc00223c3c0) (1) Data frame handling I0514 11:12:03.840894 6 log.go:172] (0xc00223c3c0) (1) Data frame sent I0514 11:12:03.840913 6 log.go:172] (0xc0005e7810) (0xc00223c3c0) Stream removed, broadcasting: 1 I0514 11:12:03.840929 6 log.go:172] (0xc0005e7810) Go away received I0514 11:12:03.841486 6 log.go:172] (0xc0005e7810) (0xc00223c3c0) Stream removed, broadcasting: 1 I0514 11:12:03.841505 6 log.go:172] (0xc0005e7810) (0xc0021a5ea0) Stream removed, broadcasting: 3 I0514 11:12:03.841514 6 log.go:172] (0xc0005e7810) (0xc0021d7720) Stream removed, broadcasting: 5 May 14 11:12:03.841: INFO: Found all expected endpoints: [netserver-0] May 14 11:12:03.844: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.99:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-h9hbn PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:12:03.844: INFO: >>> kubeConfig: /root/.kube/config I0514 11:12:03.868363 6 log.go:172] (0xc00126a2c0) (0xc0021d7a40) Create stream I0514 11:12:03.868388 6 log.go:172] (0xc00126a2c0) (0xc0021d7a40) Stream added, broadcasting: 1 I0514 11:12:03.870630 6 log.go:172] (0xc00126a2c0) Reply frame received for 1 I0514 11:12:03.870657 6 log.go:172] (0xc00126a2c0) (0xc0008e7540) Create stream I0514 11:12:03.870665 6 log.go:172] (0xc00126a2c0) (0xc0008e7540) Stream added, broadcasting: 3 I0514 11:12:03.871561 6 log.go:172] (0xc00126a2c0) Reply frame received for 3 I0514 11:12:03.871598 6 log.go:172] (0xc00126a2c0) (0xc0021d7ae0) Create stream I0514 11:12:03.871612 6 log.go:172] (0xc00126a2c0) (0xc0021d7ae0) Stream added, broadcasting: 5 I0514 11:12:03.872635 6 log.go:172] (0xc00126a2c0) Reply frame received for 5 I0514 11:12:03.948275 6 log.go:172] (0xc00126a2c0) Data frame received for 3 I0514 11:12:03.948308 6 log.go:172] (0xc0008e7540) (3) Data frame handling I0514 11:12:03.948328 6 log.go:172] (0xc0008e7540) (3) Data frame sent I0514 11:12:03.948356 6 log.go:172] (0xc00126a2c0) Data frame received for 3 I0514 11:12:03.948382 6 log.go:172] (0xc0008e7540) (3) Data frame handling I0514 11:12:03.948690 6 log.go:172] (0xc00126a2c0) Data frame received for 5 I0514 11:12:03.948705 6 log.go:172] (0xc0021d7ae0) (5) Data frame handling I0514 11:12:03.950234 6 log.go:172] (0xc00126a2c0) Data frame received for 1 I0514 11:12:03.950252 6 log.go:172] (0xc0021d7a40) (1) Data frame handling I0514 11:12:03.950262 6 log.go:172] (0xc0021d7a40) (1) Data frame sent I0514 11:12:03.950277 6 log.go:172] (0xc00126a2c0) (0xc0021d7a40) Stream removed, broadcasting: 1 I0514 11:12:03.950310 6 log.go:172] (0xc00126a2c0) Go away received I0514 11:12:03.950499 6 log.go:172] (0xc00126a2c0) (0xc0021d7a40) Stream removed, broadcasting: 1 I0514 11:12:03.950515 6 log.go:172] (0xc00126a2c0) (0xc0008e7540) Stream removed, broadcasting: 3 I0514 11:12:03.950522 6 log.go:172] (0xc00126a2c0) (0xc0021d7ae0) Stream removed, broadcasting: 5 May 14 11:12:03.950: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:12:03.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-h9hbn" for this suite. 
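Note: the networking check above execs curl against each netserver pod's /hostName endpoint from the host-test-container-pod. A rough Go equivalent of that single probe (the IP and port are the example values from this run and not reusable elsewhere):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHostName fetches /hostName from a netserver pod, mirroring the curl
// command the test execs in the host-test-container-pod.
func probeHostName(podIP string, port int) (string, error) {
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:%d/hostName", podIP, port))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func main() {
	name, err := probeHostName("10.244.1.243", 8080) // example endpoint taken from the log above
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("pod reports hostname:", name)
}
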
May 14 11:12:27.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:12:28.012: INFO: namespace: e2e-tests-pod-network-test-h9hbn, resource: bindings, ignored listing per whitelist May 14 11:12:28.072: INFO: namespace e2e-tests-pod-network-test-h9hbn deletion completed in 24.118468575s • [SLOW TEST:50.994 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:12:28.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-fw5dn I0514 11:12:28.917087 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-fw5dn, replica count: 1 I0514 11:12:29.967663 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 11:12:30.967918 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 11:12:31.968157 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 11:12:32.968382 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 11:12:33.968603 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 11:12:34.968814 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 14 11:12:35.128: INFO: Created: latency-svc-77mzz May 14 11:12:35.143: INFO: Got endpoints: latency-svc-77mzz [73.967013ms] May 14 11:12:35.324: INFO: Created: latency-svc-r6qdm May 14 11:12:35.376: INFO: Got endpoints: latency-svc-r6qdm [233.227233ms] May 14 11:12:35.380: INFO: Created: latency-svc-kjlzh May 14 11:12:35.491: INFO: Got endpoints: latency-svc-kjlzh [348.240163ms] May 14 11:12:35.507: INFO: Created: latency-svc-zr2zk May 14 11:12:35.524: INFO: Got endpoints: latency-svc-zr2zk [380.799397ms] May 14 11:12:35.629: INFO: Created: latency-svc-pv6wr May 14 11:12:35.634: INFO: Got endpoints: latency-svc-pv6wr [491.092906ms] May 14 11:12:35.669: INFO: Created: latency-svc-l85tl May 14 
11:12:35.675: INFO: Got endpoints: latency-svc-l85tl [532.195622ms] May 14 11:12:35.705: INFO: Created: latency-svc-8c42r May 14 11:12:35.711: INFO: Got endpoints: latency-svc-8c42r [568.065036ms] May 14 11:12:35.779: INFO: Created: latency-svc-52hs6 May 14 11:12:35.782: INFO: Got endpoints: latency-svc-52hs6 [639.018569ms] May 14 11:12:35.820: INFO: Created: latency-svc-7b5zj May 14 11:12:35.832: INFO: Got endpoints: latency-svc-7b5zj [689.063695ms] May 14 11:12:35.855: INFO: Created: latency-svc-qgq8t May 14 11:12:35.922: INFO: Got endpoints: latency-svc-qgq8t [779.350534ms] May 14 11:12:35.963: INFO: Created: latency-svc-bgmnc May 14 11:12:35.976: INFO: Got endpoints: latency-svc-bgmnc [833.43606ms] May 14 11:12:36.005: INFO: Created: latency-svc-rlbtq May 14 11:12:36.042: INFO: Got endpoints: latency-svc-rlbtq [899.043468ms] May 14 11:12:36.065: INFO: Created: latency-svc-q459l May 14 11:12:36.079: INFO: Got endpoints: latency-svc-q459l [936.381483ms] May 14 11:12:36.123: INFO: Created: latency-svc-cpfs6 May 14 11:12:36.192: INFO: Got endpoints: latency-svc-cpfs6 [1.048535065s] May 14 11:12:36.204: INFO: Created: latency-svc-5cvz9 May 14 11:12:36.230: INFO: Got endpoints: latency-svc-5cvz9 [1.087104962s] May 14 11:12:36.257: INFO: Created: latency-svc-tlg24 May 14 11:12:36.266: INFO: Got endpoints: latency-svc-tlg24 [1.123245142s] May 14 11:12:36.330: INFO: Created: latency-svc-kkvjq May 14 11:12:36.333: INFO: Got endpoints: latency-svc-kkvjq [956.90804ms] May 14 11:12:36.371: INFO: Created: latency-svc-ntf7d May 14 11:12:36.388: INFO: Got endpoints: latency-svc-ntf7d [896.392017ms] May 14 11:12:36.407: INFO: Created: latency-svc-48jjm May 14 11:12:36.473: INFO: Got endpoints: latency-svc-48jjm [949.559878ms] May 14 11:12:36.479: INFO: Created: latency-svc-8s8s6 May 14 11:12:36.496: INFO: Got endpoints: latency-svc-8s8s6 [862.125901ms] May 14 11:12:36.539: INFO: Created: latency-svc-wlnkd May 14 11:12:36.556: INFO: Got endpoints: latency-svc-wlnkd [881.228187ms] May 14 11:12:36.611: INFO: Created: latency-svc-5j798 May 14 11:12:36.623: INFO: Got endpoints: latency-svc-5j798 [911.574575ms] May 14 11:12:36.683: INFO: Created: latency-svc-tthpq May 14 11:12:36.701: INFO: Got endpoints: latency-svc-tthpq [919.400879ms] May 14 11:12:36.761: INFO: Created: latency-svc-pzbgv May 14 11:12:36.764: INFO: Got endpoints: latency-svc-pzbgv [931.674406ms] May 14 11:12:36.827: INFO: Created: latency-svc-qzqfb May 14 11:12:36.847: INFO: Got endpoints: latency-svc-qzqfb [924.707862ms] May 14 11:12:36.917: INFO: Created: latency-svc-sfz6q May 14 11:12:36.948: INFO: Got endpoints: latency-svc-sfz6q [971.035019ms] May 14 11:12:36.948: INFO: Created: latency-svc-xtd5p May 14 11:12:36.960: INFO: Got endpoints: latency-svc-xtd5p [918.105677ms] May 14 11:12:36.995: INFO: Created: latency-svc-zfgb4 May 14 11:12:37.066: INFO: Got endpoints: latency-svc-zfgb4 [986.500034ms] May 14 11:12:37.078: INFO: Created: latency-svc-4sqlc May 14 11:12:37.093: INFO: Got endpoints: latency-svc-4sqlc [901.231899ms] May 14 11:12:37.115: INFO: Created: latency-svc-x4ff4 May 14 11:12:37.136: INFO: Got endpoints: latency-svc-x4ff4 [905.50817ms] May 14 11:12:37.163: INFO: Created: latency-svc-rq74j May 14 11:12:37.226: INFO: Got endpoints: latency-svc-rq74j [959.520488ms] May 14 11:12:37.283: INFO: Created: latency-svc-wgxth May 14 11:12:37.361: INFO: Got endpoints: latency-svc-wgxth [1.028099283s] May 14 11:12:37.363: INFO: Created: latency-svc-qwwq5 May 14 11:12:37.383: INFO: Got endpoints: latency-svc-qwwq5 [995.227958ms] May 14 
11:12:37.445: INFO: Created: latency-svc-hv6tb May 14 11:12:37.509: INFO: Got endpoints: latency-svc-hv6tb [1.035470407s] May 14 11:12:37.553: INFO: Created: latency-svc-hsrfr May 14 11:12:37.569: INFO: Got endpoints: latency-svc-hsrfr [1.073004116s] May 14 11:12:37.666: INFO: Created: latency-svc-fp2b8 May 14 11:12:37.668: INFO: Got endpoints: latency-svc-fp2b8 [1.111662889s] May 14 11:12:37.697: INFO: Created: latency-svc-l997x May 14 11:12:37.706: INFO: Got endpoints: latency-svc-l997x [1.083358485s] May 14 11:12:37.727: INFO: Created: latency-svc-7dlnl May 14 11:12:37.737: INFO: Got endpoints: latency-svc-7dlnl [1.035258492s] May 14 11:12:37.821: INFO: Created: latency-svc-l8cfw May 14 11:12:37.823: INFO: Got endpoints: latency-svc-l8cfw [1.059518387s] May 14 11:12:37.888: INFO: Created: latency-svc-m44r7 May 14 11:12:37.908: INFO: Got endpoints: latency-svc-m44r7 [1.060715084s] May 14 11:12:37.976: INFO: Created: latency-svc-qmvtg May 14 11:12:37.980: INFO: Got endpoints: latency-svc-qmvtg [1.032005227s] May 14 11:12:38.008: INFO: Created: latency-svc-gp28k May 14 11:12:38.026: INFO: Got endpoints: latency-svc-gp28k [1.065648719s] May 14 11:12:38.068: INFO: Created: latency-svc-cmkv8 May 14 11:12:38.108: INFO: Got endpoints: latency-svc-cmkv8 [1.042121626s] May 14 11:12:38.140: INFO: Created: latency-svc-q265b May 14 11:12:38.153: INFO: Got endpoints: latency-svc-q265b [1.059972048s] May 14 11:12:38.195: INFO: Created: latency-svc-rfnxj May 14 11:12:38.257: INFO: Got endpoints: latency-svc-rfnxj [1.121458365s] May 14 11:12:38.280: INFO: Created: latency-svc-9mhnh May 14 11:12:38.297: INFO: Got endpoints: latency-svc-9mhnh [1.071486587s] May 14 11:12:38.326: INFO: Created: latency-svc-ccwzj May 14 11:12:38.340: INFO: Got endpoints: latency-svc-ccwzj [978.546429ms] May 14 11:12:38.426: INFO: Created: latency-svc-w4tnf May 14 11:12:38.464: INFO: Got endpoints: latency-svc-w4tnf [1.081087836s] May 14 11:12:38.464: INFO: Created: latency-svc-qptfk May 14 11:12:38.478: INFO: Got endpoints: latency-svc-qptfk [969.369724ms] May 14 11:12:38.500: INFO: Created: latency-svc-rsntq May 14 11:12:38.515: INFO: Got endpoints: latency-svc-rsntq [945.533152ms] May 14 11:12:38.605: INFO: Created: latency-svc-scnfn May 14 11:12:38.614: INFO: Got endpoints: latency-svc-scnfn [945.711692ms] May 14 11:12:38.651: INFO: Created: latency-svc-ztqlg May 14 11:12:38.659: INFO: Got endpoints: latency-svc-ztqlg [953.400067ms] May 14 11:12:38.680: INFO: Created: latency-svc-dggj4 May 14 11:12:38.702: INFO: Got endpoints: latency-svc-dggj4 [965.368098ms] May 14 11:12:38.790: INFO: Created: latency-svc-lnsq2 May 14 11:12:38.811: INFO: Got endpoints: latency-svc-lnsq2 [987.588029ms] May 14 11:12:38.831: INFO: Created: latency-svc-v984d May 14 11:12:38.880: INFO: Got endpoints: latency-svc-v984d [972.357505ms] May 14 11:12:38.897: INFO: Created: latency-svc-6mvs5 May 14 11:12:38.912: INFO: Got endpoints: latency-svc-6mvs5 [932.160192ms] May 14 11:12:40.995: INFO: Created: latency-svc-zcrs4 May 14 11:12:40.999: INFO: Got endpoints: latency-svc-zcrs4 [2.972576449s] May 14 11:12:41.048: INFO: Created: latency-svc-sj884 May 14 11:12:41.076: INFO: Got endpoints: latency-svc-sj884 [2.967495265s] May 14 11:12:41.131: INFO: Created: latency-svc-4wnz4 May 14 11:12:41.137: INFO: Got endpoints: latency-svc-4wnz4 [2.98447558s] May 14 11:12:41.216: INFO: Created: latency-svc-49cg7 May 14 11:12:41.270: INFO: Got endpoints: latency-svc-49cg7 [3.012464733s] May 14 11:12:41.312: INFO: Created: latency-svc-tqpwz May 14 11:12:41.359: 
INFO: Got endpoints: latency-svc-tqpwz [3.061605658s] May 14 11:12:41.437: INFO: Created: latency-svc-6s777 May 14 11:12:41.443: INFO: Got endpoints: latency-svc-6s777 [3.102909373s] May 14 11:12:41.474: INFO: Created: latency-svc-tr56g May 14 11:12:41.509: INFO: Got endpoints: latency-svc-tr56g [3.044974764s] May 14 11:12:41.608: INFO: Created: latency-svc-zpwpj May 14 11:12:41.611: INFO: Got endpoints: latency-svc-zpwpj [3.132280404s] May 14 11:12:41.642: INFO: Created: latency-svc-7xp2j May 14 11:12:41.660: INFO: Got endpoints: latency-svc-7xp2j [3.144914793s] May 14 11:12:41.684: INFO: Created: latency-svc-glrfp May 14 11:12:41.697: INFO: Got endpoints: latency-svc-glrfp [3.082507332s] May 14 11:12:43.186: INFO: Created: latency-svc-lhsnl May 14 11:12:43.231: INFO: Got endpoints: latency-svc-lhsnl [4.571647141s] May 14 11:12:44.357: INFO: Created: latency-svc-bt4c7 May 14 11:12:44.362: INFO: Got endpoints: latency-svc-bt4c7 [5.660059522s] May 14 11:12:44.395: INFO: Created: latency-svc-7t6dv May 14 11:12:44.467: INFO: Got endpoints: latency-svc-7t6dv [5.655699197s] May 14 11:12:44.497: INFO: Created: latency-svc-xb575 May 14 11:12:44.507: INFO: Got endpoints: latency-svc-xb575 [5.62620936s] May 14 11:12:44.527: INFO: Created: latency-svc-8sm4j May 14 11:12:44.543: INFO: Got endpoints: latency-svc-8sm4j [5.631219147s] May 14 11:12:44.563: INFO: Created: latency-svc-lxpw5 May 14 11:12:44.642: INFO: Got endpoints: latency-svc-lxpw5 [3.643269219s] May 14 11:12:44.643: INFO: Created: latency-svc-gr2tj May 14 11:12:44.652: INFO: Got endpoints: latency-svc-gr2tj [3.575994391s] May 14 11:12:44.689: INFO: Created: latency-svc-ptf8c May 14 11:12:44.719: INFO: Got endpoints: latency-svc-ptf8c [3.581114113s] May 14 11:12:44.796: INFO: Created: latency-svc-22f9v May 14 11:12:44.803: INFO: Got endpoints: latency-svc-22f9v [3.532779506s] May 14 11:12:45.187: INFO: Created: latency-svc-gmwbv May 14 11:12:45.278: INFO: Created: latency-svc-dnfs2 May 14 11:12:45.279: INFO: Got endpoints: latency-svc-gmwbv [3.920150379s] May 14 11:12:45.291: INFO: Got endpoints: latency-svc-dnfs2 [3.848835199s] May 14 11:12:45.348: INFO: Created: latency-svc-fj7mz May 14 11:12:45.427: INFO: Got endpoints: latency-svc-fj7mz [3.917562135s] May 14 11:12:45.492: INFO: Created: latency-svc-cl6x8 May 14 11:12:45.574: INFO: Got endpoints: latency-svc-cl6x8 [282.903063ms] May 14 11:12:45.622: INFO: Created: latency-svc-djq55 May 14 11:12:45.632: INFO: Got endpoints: latency-svc-djq55 [4.021136244s] May 14 11:12:45.674: INFO: Created: latency-svc-25gtn May 14 11:12:45.730: INFO: Got endpoints: latency-svc-25gtn [4.070672957s] May 14 11:12:45.738: INFO: Created: latency-svc-fjnbs May 14 11:12:45.756: INFO: Got endpoints: latency-svc-fjnbs [4.059485942s] May 14 11:12:45.786: INFO: Created: latency-svc-ndnq9 May 14 11:12:45.802: INFO: Got endpoints: latency-svc-ndnq9 [2.57054385s] May 14 11:12:45.828: INFO: Created: latency-svc-t4xl2 May 14 11:12:45.892: INFO: Got endpoints: latency-svc-t4xl2 [1.529401594s] May 14 11:12:45.925: INFO: Created: latency-svc-xtdqf May 14 11:12:45.946: INFO: Got endpoints: latency-svc-xtdqf [1.479583022s] May 14 11:12:45.972: INFO: Created: latency-svc-v7jdt May 14 11:12:45.982: INFO: Got endpoints: latency-svc-v7jdt [1.475898994s] May 14 11:12:46.042: INFO: Created: latency-svc-q74td May 14 11:12:46.049: INFO: Got endpoints: latency-svc-q74td [1.505854119s] May 14 11:12:46.075: INFO: Created: latency-svc-mgwjj May 14 11:12:46.092: INFO: Got endpoints: latency-svc-mgwjj [1.449535208s] May 14 
11:12:46.111: INFO: Created: latency-svc-2sfjk May 14 11:12:46.134: INFO: Got endpoints: latency-svc-2sfjk [1.482029175s] May 14 11:12:46.199: INFO: Created: latency-svc-4nj6l May 14 11:12:46.224: INFO: Got endpoints: latency-svc-4nj6l [1.504973155s] May 14 11:12:46.260: INFO: Created: latency-svc-pb9f7 May 14 11:12:46.278: INFO: Got endpoints: latency-svc-pb9f7 [1.475547706s] May 14 11:12:46.350: INFO: Created: latency-svc-ktpv8 May 14 11:12:46.356: INFO: Got endpoints: latency-svc-ktpv8 [1.076924046s] May 14 11:12:46.399: INFO: Created: latency-svc-p68lc May 14 11:12:46.428: INFO: Got endpoints: latency-svc-p68lc [1.001065819s] May 14 11:12:46.522: INFO: Created: latency-svc-tjv6x May 14 11:12:46.543: INFO: Got endpoints: latency-svc-tjv6x [968.985741ms] May 14 11:12:46.572: INFO: Created: latency-svc-ksww2 May 14 11:12:46.585: INFO: Got endpoints: latency-svc-ksww2 [953.308313ms] May 14 11:12:46.616: INFO: Created: latency-svc-s6rx6 May 14 11:12:46.653: INFO: Got endpoints: latency-svc-s6rx6 [921.995397ms] May 14 11:12:46.701: INFO: Created: latency-svc-mph6n May 14 11:12:46.718: INFO: Got endpoints: latency-svc-mph6n [962.352907ms] May 14 11:12:46.824: INFO: Created: latency-svc-5rf89 May 14 11:12:46.850: INFO: Got endpoints: latency-svc-5rf89 [1.04865583s] May 14 11:12:46.947: INFO: Created: latency-svc-bdnsk May 14 11:12:46.949: INFO: Got endpoints: latency-svc-bdnsk [1.057497398s] May 14 11:12:46.987: INFO: Created: latency-svc-24lk6 May 14 11:12:47.028: INFO: Got endpoints: latency-svc-24lk6 [1.081528683s] May 14 11:12:47.138: INFO: Created: latency-svc-lktrp May 14 11:12:47.141: INFO: Got endpoints: latency-svc-lktrp [1.158671618s] May 14 11:12:47.172: INFO: Created: latency-svc-2sht4 May 14 11:12:47.188: INFO: Got endpoints: latency-svc-2sht4 [1.139080818s] May 14 11:12:47.232: INFO: Created: latency-svc-5lwwd May 14 11:12:47.293: INFO: Got endpoints: latency-svc-5lwwd [1.201660332s] May 14 11:12:47.328: INFO: Created: latency-svc-w7txs May 14 11:12:47.345: INFO: Got endpoints: latency-svc-w7txs [1.210981407s] May 14 11:12:47.371: INFO: Created: latency-svc-vbk9t May 14 11:12:47.393: INFO: Got endpoints: latency-svc-vbk9t [1.169297517s] May 14 11:12:47.467: INFO: Created: latency-svc-z4gqm May 14 11:12:47.483: INFO: Got endpoints: latency-svc-z4gqm [1.205033052s] May 14 11:12:47.526: INFO: Created: latency-svc-fdzk5 May 14 11:12:47.539: INFO: Got endpoints: latency-svc-fdzk5 [1.182647085s] May 14 11:12:47.629: INFO: Created: latency-svc-68fxp May 14 11:12:47.646: INFO: Got endpoints: latency-svc-68fxp [1.218286867s] May 14 11:12:47.682: INFO: Created: latency-svc-l6snj May 14 11:12:47.706: INFO: Got endpoints: latency-svc-l6snj [1.162964528s] May 14 11:12:47.790: INFO: Created: latency-svc-htwb8 May 14 11:12:47.802: INFO: Got endpoints: latency-svc-htwb8 [1.216533739s] May 14 11:12:47.880: INFO: Created: latency-svc-4nfzx May 14 11:12:47.935: INFO: Got endpoints: latency-svc-4nfzx [1.281928871s] May 14 11:12:47.971: INFO: Created: latency-svc-jm6hb May 14 11:12:47.983: INFO: Got endpoints: latency-svc-jm6hb [1.264435636s] May 14 11:12:48.013: INFO: Created: latency-svc-fnxvr May 14 11:12:48.026: INFO: Got endpoints: latency-svc-fnxvr [1.175174418s] May 14 11:12:48.085: INFO: Created: latency-svc-szbnl May 14 11:12:48.092: INFO: Got endpoints: latency-svc-szbnl [1.14290629s] May 14 11:12:48.120: INFO: Created: latency-svc-76dpg May 14 11:12:48.174: INFO: Got endpoints: latency-svc-76dpg [1.146215249s] May 14 11:12:48.246: INFO: Created: latency-svc-tb7df May 14 11:12:48.255: INFO: 
Got endpoints: latency-svc-tb7df [1.113597376s] May 14 11:12:48.276: INFO: Created: latency-svc-rf2mq May 14 11:12:48.291: INFO: Got endpoints: latency-svc-rf2mq [1.102787303s] May 14 11:12:48.312: INFO: Created: latency-svc-ttrjs May 14 11:12:48.330: INFO: Got endpoints: latency-svc-ttrjs [1.036973298s] May 14 11:12:48.401: INFO: Created: latency-svc-pxct7 May 14 11:12:48.404: INFO: Got endpoints: latency-svc-pxct7 [1.059016507s] May 14 11:12:48.438: INFO: Created: latency-svc-wqftb May 14 11:12:48.454: INFO: Got endpoints: latency-svc-wqftb [1.061213253s] May 14 11:12:48.474: INFO: Created: latency-svc-8wbwv May 14 11:12:48.491: INFO: Got endpoints: latency-svc-8wbwv [1.00758742s] May 14 11:12:48.551: INFO: Created: latency-svc-tpvhv May 14 11:12:48.554: INFO: Got endpoints: latency-svc-tpvhv [1.014691967s] May 14 11:12:48.576: INFO: Created: latency-svc-9spbb May 14 11:12:48.593: INFO: Got endpoints: latency-svc-9spbb [946.708767ms] May 14 11:12:48.624: INFO: Created: latency-svc-l4k8p May 14 11:12:48.641: INFO: Got endpoints: latency-svc-l4k8p [934.829221ms] May 14 11:12:48.719: INFO: Created: latency-svc-67qw4 May 14 11:12:48.722: INFO: Got endpoints: latency-svc-67qw4 [919.54547ms] May 14 11:12:48.781: INFO: Created: latency-svc-sqnm8 May 14 11:12:48.910: INFO: Got endpoints: latency-svc-sqnm8 [975.452785ms] May 14 11:12:48.936: INFO: Created: latency-svc-f59rp May 14 11:12:48.954: INFO: Got endpoints: latency-svc-f59rp [970.88986ms] May 14 11:12:48.984: INFO: Created: latency-svc-d2hzz May 14 11:12:49.072: INFO: Got endpoints: latency-svc-d2hzz [1.0463407s] May 14 11:12:49.098: INFO: Created: latency-svc-bxkbm May 14 11:12:49.116: INFO: Got endpoints: latency-svc-bxkbm [1.024123181s] May 14 11:12:49.140: INFO: Created: latency-svc-jr7d4 May 14 11:12:49.171: INFO: Got endpoints: latency-svc-jr7d4 [996.960383ms] May 14 11:12:49.229: INFO: Created: latency-svc-n2ztr May 14 11:12:49.262: INFO: Got endpoints: latency-svc-n2ztr [1.00658839s] May 14 11:12:49.302: INFO: Created: latency-svc-ndpl6 May 14 11:12:49.346: INFO: Got endpoints: latency-svc-ndpl6 [1.05527379s] May 14 11:12:49.374: INFO: Created: latency-svc-st5dn May 14 11:12:49.388: INFO: Got endpoints: latency-svc-st5dn [1.057606817s] May 14 11:12:49.422: INFO: Created: latency-svc-vh7gq May 14 11:12:49.527: INFO: Got endpoints: latency-svc-vh7gq [1.122733531s] May 14 11:12:49.529: INFO: Created: latency-svc-p42tg May 14 11:12:49.554: INFO: Got endpoints: latency-svc-p42tg [1.099429125s] May 14 11:12:49.591: INFO: Created: latency-svc-4lwtr May 14 11:12:49.605: INFO: Got endpoints: latency-svc-4lwtr [1.114129179s] May 14 11:12:49.689: INFO: Created: latency-svc-qbk87 May 14 11:12:49.695: INFO: Got endpoints: latency-svc-qbk87 [1.141379271s] May 14 11:12:49.728: INFO: Created: latency-svc-wkk29 May 14 11:12:49.744: INFO: Got endpoints: latency-svc-wkk29 [1.150531293s] May 14 11:12:49.764: INFO: Created: latency-svc-r7r2j May 14 11:12:49.780: INFO: Got endpoints: latency-svc-r7r2j [1.138683707s] May 14 11:12:49.845: INFO: Created: latency-svc-t998r May 14 11:12:49.851: INFO: Got endpoints: latency-svc-t998r [1.129013205s] May 14 11:12:49.908: INFO: Created: latency-svc-mpj25 May 14 11:12:49.939: INFO: Got endpoints: latency-svc-mpj25 [1.029416164s] May 14 11:12:50.000: INFO: Created: latency-svc-99hmz May 14 11:12:50.003: INFO: Got endpoints: latency-svc-99hmz [1.049255986s] May 14 11:12:50.040: INFO: Created: latency-svc-k8mcg May 14 11:12:50.063: INFO: Got endpoints: latency-svc-k8mcg [991.245534ms] May 14 11:12:50.157: INFO: 
Created: latency-svc-96ztc May 14 11:12:50.196: INFO: Got endpoints: latency-svc-96ztc [1.07913095s] May 14 11:12:50.232: INFO: Created: latency-svc-pnwcw May 14 11:12:50.250: INFO: Got endpoints: latency-svc-pnwcw [1.07913708s] May 14 11:12:50.306: INFO: Created: latency-svc-jss5k May 14 11:12:50.311: INFO: Got endpoints: latency-svc-jss5k [1.049363413s] May 14 11:12:50.346: INFO: Created: latency-svc-6mq7f May 14 11:12:50.375: INFO: Got endpoints: latency-svc-6mq7f [1.028797754s] May 14 11:12:50.455: INFO: Created: latency-svc-l8nj5 May 14 11:12:50.455: INFO: Got endpoints: latency-svc-l8nj5 [1.067467979s] May 14 11:12:50.490: INFO: Created: latency-svc-59xlh May 14 11:12:50.503: INFO: Got endpoints: latency-svc-59xlh [976.72819ms] May 14 11:12:50.531: INFO: Created: latency-svc-fdpgl May 14 11:12:50.605: INFO: Got endpoints: latency-svc-fdpgl [1.051048676s] May 14 11:12:50.634: INFO: Created: latency-svc-cdxhx May 14 11:12:50.661: INFO: Got endpoints: latency-svc-cdxhx [1.056190901s] May 14 11:12:50.682: INFO: Created: latency-svc-2ph4p May 14 11:12:50.697: INFO: Got endpoints: latency-svc-2ph4p [1.002008733s] May 14 11:12:50.755: INFO: Created: latency-svc-xzfcb May 14 11:12:50.763: INFO: Got endpoints: latency-svc-xzfcb [1.019899768s] May 14 11:12:50.796: INFO: Created: latency-svc-bz4nb May 14 11:12:50.830: INFO: Got endpoints: latency-svc-bz4nb [1.049527475s] May 14 11:12:50.916: INFO: Created: latency-svc-fkxrf May 14 11:12:50.920: INFO: Got endpoints: latency-svc-fkxrf [1.068939483s] May 14 11:12:50.951: INFO: Created: latency-svc-7hftd May 14 11:12:50.974: INFO: Got endpoints: latency-svc-7hftd [1.034726774s] May 14 11:12:51.012: INFO: Created: latency-svc-n6mxp May 14 11:12:51.108: INFO: Got endpoints: latency-svc-n6mxp [1.104370914s] May 14 11:12:51.112: INFO: Created: latency-svc-62jjg May 14 11:12:51.119: INFO: Got endpoints: latency-svc-62jjg [1.055754348s] May 14 11:12:51.183: INFO: Created: latency-svc-dz5sn May 14 11:12:51.198: INFO: Got endpoints: latency-svc-dz5sn [1.002085899s] May 14 11:12:51.252: INFO: Created: latency-svc-cjtxm May 14 11:12:51.281: INFO: Got endpoints: latency-svc-cjtxm [1.0308838s] May 14 11:12:51.330: INFO: Created: latency-svc-vbsl5 May 14 11:12:51.348: INFO: Got endpoints: latency-svc-vbsl5 [1.036987008s] May 14 11:12:51.401: INFO: Created: latency-svc-qlrbm May 14 11:12:51.414: INFO: Got endpoints: latency-svc-qlrbm [1.038902344s] May 14 11:12:51.455: INFO: Created: latency-svc-256dj May 14 11:12:51.474: INFO: Got endpoints: latency-svc-256dj [1.019051891s] May 14 11:12:51.569: INFO: Created: latency-svc-rkxw7 May 14 11:12:51.577: INFO: Got endpoints: latency-svc-rkxw7 [1.073717933s] May 14 11:12:51.619: INFO: Created: latency-svc-dp27t May 14 11:12:51.632: INFO: Got endpoints: latency-svc-dp27t [1.026792105s] May 14 11:12:51.713: INFO: Created: latency-svc-7gzs8 May 14 11:12:51.715: INFO: Got endpoints: latency-svc-7gzs8 [1.053813725s] May 14 11:12:51.810: INFO: Created: latency-svc-nld5m May 14 11:12:51.850: INFO: Got endpoints: latency-svc-nld5m [1.152823826s] May 14 11:12:51.875: INFO: Created: latency-svc-hczqz May 14 11:12:51.893: INFO: Got endpoints: latency-svc-hczqz [1.129000938s] May 14 11:12:51.930: INFO: Created: latency-svc-9krv9 May 14 11:12:51.944: INFO: Got endpoints: latency-svc-9krv9 [1.1146025s] May 14 11:12:52.008: INFO: Created: latency-svc-q42mt May 14 11:12:52.010: INFO: Got endpoints: latency-svc-q42mt [1.089986186s] May 14 11:12:52.037: INFO: Created: latency-svc-bxg2s May 14 11:12:52.054: INFO: Got endpoints: 
latency-svc-bxg2s [1.07941829s] May 14 11:12:52.098: INFO: Created: latency-svc-hrvfm May 14 11:12:52.150: INFO: Got endpoints: latency-svc-hrvfm [1.041847156s] May 14 11:12:52.176: INFO: Created: latency-svc-q5x87 May 14 11:12:52.192: INFO: Got endpoints: latency-svc-q5x87 [1.072986527s] May 14 11:12:52.218: INFO: Created: latency-svc-zdtvz May 14 11:12:52.235: INFO: Got endpoints: latency-svc-zdtvz [1.037463024s] May 14 11:12:52.371: INFO: Created: latency-svc-gphv6 May 14 11:12:52.403: INFO: Got endpoints: latency-svc-gphv6 [1.12190645s] May 14 11:12:52.445: INFO: Created: latency-svc-ht7vg May 14 11:12:52.470: INFO: Got endpoints: latency-svc-ht7vg [1.121493763s] May 14 11:12:52.509: INFO: Created: latency-svc-gk8vw May 14 11:12:52.553: INFO: Got endpoints: latency-svc-gk8vw [1.139293608s] May 14 11:12:52.720: INFO: Created: latency-svc-t8zrf May 14 11:12:52.725: INFO: Got endpoints: latency-svc-t8zrf [1.250610451s] May 14 11:12:52.811: INFO: Created: latency-svc-kc8pj May 14 11:12:52.910: INFO: Got endpoints: latency-svc-kc8pj [1.33297695s] May 14 11:12:52.920: INFO: Created: latency-svc-6ljqz May 14 11:12:52.968: INFO: Got endpoints: latency-svc-6ljqz [1.336491864s] May 14 11:12:53.072: INFO: Created: latency-svc-4vl9l May 14 11:12:53.082: INFO: Got endpoints: latency-svc-4vl9l [1.366843061s] May 14 11:12:53.142: INFO: Created: latency-svc-8lxkl May 14 11:12:53.222: INFO: Got endpoints: latency-svc-8lxkl [1.371485785s] May 14 11:12:53.255: INFO: Created: latency-svc-7kg77 May 14 11:12:53.293: INFO: Got endpoints: latency-svc-7kg77 [1.400199405s] May 14 11:12:53.395: INFO: Created: latency-svc-pj2dn May 14 11:12:53.407: INFO: Got endpoints: latency-svc-pj2dn [1.462564909s] May 14 11:12:53.448: INFO: Created: latency-svc-2btpw May 14 11:12:53.461: INFO: Got endpoints: latency-svc-2btpw [1.451636084s] May 14 11:12:53.548: INFO: Created: latency-svc-qfnzs May 14 11:12:53.548: INFO: Got endpoints: latency-svc-qfnzs [1.494536873s] May 14 11:12:53.591: INFO: Created: latency-svc-kgjjb May 14 11:12:53.601: INFO: Got endpoints: latency-svc-kgjjb [1.45121682s] May 14 11:12:53.739: INFO: Created: latency-svc-zrzmt May 14 11:12:53.743: INFO: Got endpoints: latency-svc-zrzmt [1.550309511s] May 14 11:12:53.942: INFO: Created: latency-svc-kmf5v May 14 11:12:53.943: INFO: Got endpoints: latency-svc-kmf5v [1.707396157s] May 14 11:12:54.019: INFO: Created: latency-svc-2qmcr May 14 11:12:54.032: INFO: Got endpoints: latency-svc-2qmcr [1.629076965s] May 14 11:12:54.104: INFO: Created: latency-svc-554n4 May 14 11:12:54.135: INFO: Got endpoints: latency-svc-554n4 [1.665433711s] May 14 11:12:54.306: INFO: Created: latency-svc-92c27 May 14 11:12:54.327: INFO: Got endpoints: latency-svc-92c27 [1.773351301s] May 14 11:12:54.355: INFO: Created: latency-svc-xblpx May 14 11:12:54.370: INFO: Got endpoints: latency-svc-xblpx [1.64440911s] May 14 11:12:54.390: INFO: Created: latency-svc-hhvqn May 14 11:12:54.461: INFO: Got endpoints: latency-svc-hhvqn [1.550492615s] May 14 11:12:54.463: INFO: Created: latency-svc-qr9tk May 14 11:12:54.472: INFO: Got endpoints: latency-svc-qr9tk [1.503834508s] May 14 11:12:54.511: INFO: Created: latency-svc-5jvn5 May 14 11:12:54.533: INFO: Got endpoints: latency-svc-5jvn5 [1.451057683s] May 14 11:12:54.647: INFO: Created: latency-svc-fc7cz May 14 11:12:54.660: INFO: Got endpoints: latency-svc-fc7cz [1.438440494s] May 14 11:12:54.708: INFO: Created: latency-svc-cwdjh May 14 11:12:54.737: INFO: Got endpoints: latency-svc-cwdjh [1.444249838s] May 14 11:12:54.796: INFO: Created: 
latency-svc-jjqfv May 14 11:12:54.803: INFO: Got endpoints: latency-svc-jjqfv [1.396314657s] May 14 11:12:54.840: INFO: Created: latency-svc-7vxvd May 14 11:12:54.858: INFO: Got endpoints: latency-svc-7vxvd [1.396729669s] May 14 11:12:54.971: INFO: Created: latency-svc-9rjh7 May 14 11:12:54.972: INFO: Got endpoints: latency-svc-9rjh7 [1.423579703s] May 14 11:12:54.972: INFO: Latencies: [233.227233ms 282.903063ms 348.240163ms 380.799397ms 491.092906ms 532.195622ms 568.065036ms 639.018569ms 689.063695ms 779.350534ms 833.43606ms 862.125901ms 881.228187ms 896.392017ms 899.043468ms 901.231899ms 905.50817ms 911.574575ms 918.105677ms 919.400879ms 919.54547ms 921.995397ms 924.707862ms 931.674406ms 932.160192ms 934.829221ms 936.381483ms 945.533152ms 945.711692ms 946.708767ms 949.559878ms 953.308313ms 953.400067ms 956.90804ms 959.520488ms 962.352907ms 965.368098ms 968.985741ms 969.369724ms 970.88986ms 971.035019ms 972.357505ms 975.452785ms 976.72819ms 978.546429ms 986.500034ms 987.588029ms 991.245534ms 995.227958ms 996.960383ms 1.001065819s 1.002008733s 1.002085899s 1.00658839s 1.00758742s 1.014691967s 1.019051891s 1.019899768s 1.024123181s 1.026792105s 1.028099283s 1.028797754s 1.029416164s 1.0308838s 1.032005227s 1.034726774s 1.035258492s 1.035470407s 1.036973298s 1.036987008s 1.037463024s 1.038902344s 1.041847156s 1.042121626s 1.0463407s 1.048535065s 1.04865583s 1.049255986s 1.049363413s 1.049527475s 1.051048676s 1.053813725s 1.05527379s 1.055754348s 1.056190901s 1.057497398s 1.057606817s 1.059016507s 1.059518387s 1.059972048s 1.060715084s 1.061213253s 1.065648719s 1.067467979s 1.068939483s 1.071486587s 1.072986527s 1.073004116s 1.073717933s 1.076924046s 1.07913095s 1.07913708s 1.07941829s 1.081087836s 1.081528683s 1.083358485s 1.087104962s 1.089986186s 1.099429125s 1.102787303s 1.104370914s 1.111662889s 1.113597376s 1.114129179s 1.1146025s 1.121458365s 1.121493763s 1.12190645s 1.122733531s 1.123245142s 1.129000938s 1.129013205s 1.138683707s 1.139080818s 1.139293608s 1.141379271s 1.14290629s 1.146215249s 1.150531293s 1.152823826s 1.158671618s 1.162964528s 1.169297517s 1.175174418s 1.182647085s 1.201660332s 1.205033052s 1.210981407s 1.216533739s 1.218286867s 1.250610451s 1.264435636s 1.281928871s 1.33297695s 1.336491864s 1.366843061s 1.371485785s 1.396314657s 1.396729669s 1.400199405s 1.423579703s 1.438440494s 1.444249838s 1.449535208s 1.451057683s 1.45121682s 1.451636084s 1.462564909s 1.475547706s 1.475898994s 1.479583022s 1.482029175s 1.494536873s 1.503834508s 1.504973155s 1.505854119s 1.529401594s 1.550309511s 1.550492615s 1.629076965s 1.64440911s 1.665433711s 1.707396157s 1.773351301s 2.57054385s 2.967495265s 2.972576449s 2.98447558s 3.012464733s 3.044974764s 3.061605658s 3.082507332s 3.102909373s 3.132280404s 3.144914793s 3.532779506s 3.575994391s 3.581114113s 3.643269219s 3.848835199s 3.917562135s 3.920150379s 4.021136244s 4.059485942s 4.070672957s 4.571647141s 5.62620936s 5.631219147s 5.655699197s 5.660059522s] May 14 11:12:54.972: INFO: 50 %ile: 1.07913095s May 14 11:12:54.972: INFO: 90 %ile: 3.061605658s May 14 11:12:54.972: INFO: 99 %ile: 5.655699197s May 14 11:12:54.972: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:12:54.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-fw5dn" for this suite. 
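Note: the latency test above collects roughly 200 endpoint-propagation samples and reports 50/90/99th percentiles. The exact formula the framework uses is not shown in the log; a generic nearest-rank sketch over such samples would look like this (sample values below are only a handful picked for illustration):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the nearest-rank p-th percentile of a sorted slice.
// This is a generic sketch, not necessarily the framework's exact method.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(float64(len(sorted))*p/100.0) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A few sample values in the spirit of the "Got endpoints" latencies above.
	samples := []time.Duration{
		233 * time.Millisecond,
		900 * time.Millisecond,
		1079 * time.Millisecond,
		3061 * time.Millisecond,
		5655 * time.Millisecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%.0f %%ile: %v\n", p, percentile(samples, p))
	}
}
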
May 14 11:14:01.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:14:01.039: INFO: namespace: e2e-tests-svc-latency-fw5dn, resource: bindings, ignored listing per whitelist May 14 11:14:01.068: INFO: namespace e2e-tests-svc-latency-fw5dn deletion completed in 1m6.083497794s • [SLOW TEST:92.995 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:14:01.068: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 14 11:14:01.300: INFO: Waiting up to 5m0s for pod "downward-api-0380aba7-95d4-11ea-9b22-0242ac110018" in namespace "e2e-tests-downward-api-6ljf8" to be "success or failure" May 14 11:14:01.389: INFO: Pod "downward-api-0380aba7-95d4-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 89.230059ms May 14 11:14:03.392: INFO: Pod "downward-api-0380aba7-95d4-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091634295s May 14 11:14:05.456: INFO: Pod "downward-api-0380aba7-95d4-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155928065s May 14 11:14:07.460: INFO: Pod "downward-api-0380aba7-95d4-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.159895426s STEP: Saw pod success May 14 11:14:07.460: INFO: Pod "downward-api-0380aba7-95d4-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:14:07.463: INFO: Trying to get logs from node hunter-worker pod downward-api-0380aba7-95d4-11ea-9b22-0242ac110018 container dapi-container: STEP: delete the pod May 14 11:14:07.550: INFO: Waiting for pod downward-api-0380aba7-95d4-11ea-9b22-0242ac110018 to disappear May 14 11:14:07.566: INFO: Pod downward-api-0380aba7-95d4-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:14:07.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6ljf8" for this suite. 
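Note: the downward API test below exposes pod name, namespace and pod IP as env vars via fieldRef. A minimal sketch of that container spec (env var names and image are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fieldEnv := func(name, fieldPath string) corev1.EnvVar {
		return corev1.EnvVar{
			Name:      name,
			ValueFrom: &corev1.EnvVarSource{FieldRef: &corev1.ObjectFieldSelector{FieldPath: fieldPath}},
		}
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
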
May 14 11:14:13.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:14:13.764: INFO: namespace: e2e-tests-downward-api-6ljf8, resource: bindings, ignored listing per whitelist May 14 11:14:13.773: INFO: namespace e2e-tests-downward-api-6ljf8 deletion completed in 6.203269173s • [SLOW TEST:12.705 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:14:13.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-0b1254a3-95d4-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 11:14:14.056: INFO: Waiting up to 5m0s for pod "pod-secrets-0b1c9b48-95d4-11ea-9b22-0242ac110018" in namespace "e2e-tests-secrets-4l7zj" to be "success or failure" May 14 11:14:14.082: INFO: Pod "pod-secrets-0b1c9b48-95d4-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 26.041254ms May 14 11:14:16.390: INFO: Pod "pod-secrets-0b1c9b48-95d4-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.333669184s May 14 11:14:18.392: INFO: Pod "pod-secrets-0b1c9b48-95d4-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.336250988s May 14 11:14:20.396: INFO: Pod "pod-secrets-0b1c9b48-95d4-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.33993004s STEP: Saw pod success May 14 11:14:20.396: INFO: Pod "pod-secrets-0b1c9b48-95d4-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:14:20.400: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-0b1c9b48-95d4-11ea-9b22-0242ac110018 container secret-volume-test: STEP: delete the pod May 14 11:14:20.547: INFO: Waiting for pod pod-secrets-0b1c9b48-95d4-11ea-9b22-0242ac110018 to disappear May 14 11:14:20.576: INFO: Pod pod-secrets-0b1c9b48-95d4-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:14:20.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-4l7zj" for this suite. 
May 14 11:14:26.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:14:26.624: INFO: namespace: e2e-tests-secrets-4l7zj, resource: bindings, ignored listing per whitelist May 14 11:14:26.679: INFO: namespace e2e-tests-secrets-4l7zj deletion completed in 6.09968429s STEP: Destroying namespace "e2e-tests-secret-namespace-dzk2v" for this suite. May 14 11:14:32.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:14:32.716: INFO: namespace: e2e-tests-secret-namespace-dzk2v, resource: bindings, ignored listing per whitelist May 14 11:14:32.767: INFO: namespace e2e-tests-secret-namespace-dzk2v deletion completed in 6.087166885s • [SLOW TEST:18.993 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:14:32.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-vggkw May 14 11:14:38.895: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-vggkw STEP: checking the pod's current state and verifying that restartCount is present May 14 11:14:38.899: INFO: Initial restart count of pod liveness-http is 0 May 14 11:14:52.930: INFO: Restart count of pod e2e-tests-container-probe-vggkw/liveness-http is now 1 (14.031321982s elapsed) May 14 11:15:13.489: INFO: Restart count of pod e2e-tests-container-probe-vggkw/liveness-http is now 2 (34.590369856s elapsed) May 14 11:15:33.693: INFO: Restart count of pod e2e-tests-container-probe-vggkw/liveness-http is now 3 (54.794922739s elapsed) May 14 11:15:54.072: INFO: Restart count of pod e2e-tests-container-probe-vggkw/liveness-http is now 4 (1m15.173474313s elapsed) May 14 11:17:13.282: INFO: Restart count of pod e2e-tests-container-probe-vggkw/liveness-http is now 5 (2m34.383286206s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:17:13.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-vggkw" for this suite. 
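Note: the liveness-http test below keeps a pod whose probe eventually fails, so the kubelet restarts it repeatedly and restartCount only ever increases (the widening gaps between restarts in the log are consistent with crash-loop back-off). A minimal sketch of such a liveness probe (path, port, image and args are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Probe a path the container stops serving after a while, so the kubelet
	// keeps restarting it and the restart count grows monotonically.
	probe := &corev1.Probe{
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	}
	// Set via the promoted field so the sketch works with both older (Handler)
	// and newer (ProbeHandler) revisions of k8s.io/api.
	probe.HTTPGet = &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "liveness",
				Image:         "k8s.gcr.io/liveness", // assumption
				Args:          []string{"/server"},   // assumption
				LivenessProbe: probe,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
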
May 14 11:17:19.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:17:20.387: INFO: namespace: e2e-tests-container-probe-vggkw, resource: bindings, ignored listing per whitelist May 14 11:17:20.425: INFO: namespace e2e-tests-container-probe-vggkw deletion completed in 7.123090716s • [SLOW TEST:167.658 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:17:20.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-7b572e81-95d4-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 11:17:22.797: INFO: Waiting up to 5m0s for pod "pod-secrets-7b77f706-95d4-11ea-9b22-0242ac110018" in namespace "e2e-tests-secrets-dkzmm" to be "success or failure" May 14 11:17:22.800: INFO: Pod "pod-secrets-7b77f706-95d4-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.333978ms May 14 11:17:25.750: INFO: Pod "pod-secrets-7b77f706-95d4-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.952976896s May 14 11:17:27.754: INFO: Pod "pod-secrets-7b77f706-95d4-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.956411937s May 14 11:17:30.031: INFO: Pod "pod-secrets-7b77f706-95d4-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 7.233158624s May 14 11:17:32.034: INFO: Pod "pod-secrets-7b77f706-95d4-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.236616007s STEP: Saw pod success May 14 11:17:32.034: INFO: Pod "pod-secrets-7b77f706-95d4-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:17:32.036: INFO: Trying to get logs from node hunter-worker pod pod-secrets-7b77f706-95d4-11ea-9b22-0242ac110018 container secret-env-test: STEP: delete the pod May 14 11:17:33.063: INFO: Waiting for pod pod-secrets-7b77f706-95d4-11ea-9b22-0242ac110018 to disappear May 14 11:17:33.080: INFO: Pod pod-secrets-7b77f706-95d4-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:17:33.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-dkzmm" for this suite. 
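The Secrets env-var test above creates a secret and a pod that surfaces one of its keys as an environment variable, then checks the container's output. A hedged sketch of that pod; the env var name, key, image, and command are illustrative, not the exact values the framework uses:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretEnvPod sketches a pod that surfaces one secret key as the
// environment variable SECRET_DATA and simply prints its environment.
func secretEnvPod(ns, secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
}

func main() { _ = secretEnvPod("default", "secret-test") }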
May 14 11:17:41.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:17:41.540: INFO: namespace: e2e-tests-secrets-dkzmm, resource: bindings, ignored listing per whitelist May 14 11:17:41.565: INFO: namespace e2e-tests-secrets-dkzmm deletion completed in 8.093921062s • [SLOW TEST:21.139 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:17:41.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-dxszs STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 11:17:41.651: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 14 11:18:13.785: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.248:8080/dial?request=hostName&protocol=udp&host=10.244.2.103&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-dxszs PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:18:13.785: INFO: >>> kubeConfig: /root/.kube/config I0514 11:18:13.819242 6 log.go:172] (0xc0005e7ad0) (0xc0021d6a00) Create stream I0514 11:18:13.819271 6 log.go:172] (0xc0005e7ad0) (0xc0021d6a00) Stream added, broadcasting: 1 I0514 11:18:13.820941 6 log.go:172] (0xc0005e7ad0) Reply frame received for 1 I0514 11:18:13.820978 6 log.go:172] (0xc0005e7ad0) (0xc001900280) Create stream I0514 11:18:13.820990 6 log.go:172] (0xc0005e7ad0) (0xc001900280) Stream added, broadcasting: 3 I0514 11:18:13.822124 6 log.go:172] (0xc0005e7ad0) Reply frame received for 3 I0514 11:18:13.822162 6 log.go:172] (0xc0005e7ad0) (0xc001900320) Create stream I0514 11:18:13.822176 6 log.go:172] (0xc0005e7ad0) (0xc001900320) Stream added, broadcasting: 5 I0514 11:18:13.823141 6 log.go:172] (0xc0005e7ad0) Reply frame received for 5 I0514 11:18:13.977746 6 log.go:172] (0xc0005e7ad0) Data frame received for 3 I0514 11:18:13.977772 6 log.go:172] (0xc001900280) (3) Data frame handling I0514 11:18:13.977786 6 log.go:172] (0xc001900280) (3) Data frame sent I0514 11:18:13.978300 6 log.go:172] (0xc0005e7ad0) Data frame received for 5 I0514 11:18:13.978313 6 log.go:172] (0xc001900320) (5) Data frame handling I0514 11:18:13.978366 6 log.go:172] (0xc0005e7ad0) Data frame received for 3 I0514 11:18:13.978386 6 log.go:172] (0xc001900280) (3) Data frame handling I0514 11:18:13.980020 6 log.go:172] (0xc0005e7ad0) Data frame received for 1 
I0514 11:18:13.980034 6 log.go:172] (0xc0021d6a00) (1) Data frame handling I0514 11:18:13.980042 6 log.go:172] (0xc0021d6a00) (1) Data frame sent I0514 11:18:13.980050 6 log.go:172] (0xc0005e7ad0) (0xc0021d6a00) Stream removed, broadcasting: 1 I0514 11:18:13.980061 6 log.go:172] (0xc0005e7ad0) Go away received I0514 11:18:13.980227 6 log.go:172] (0xc0005e7ad0) (0xc0021d6a00) Stream removed, broadcasting: 1 I0514 11:18:13.980256 6 log.go:172] (0xc0005e7ad0) (0xc001900280) Stream removed, broadcasting: 3 I0514 11:18:13.980272 6 log.go:172] (0xc0005e7ad0) (0xc001900320) Stream removed, broadcasting: 5 May 14 11:18:13.980: INFO: Waiting for endpoints: map[] May 14 11:18:13.982: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.248:8080/dial?request=hostName&protocol=udp&host=10.244.1.247&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-dxszs PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:18:13.982: INFO: >>> kubeConfig: /root/.kube/config I0514 11:18:14.007054 6 log.go:172] (0xc000921970) (0xc00223cbe0) Create stream I0514 11:18:14.007070 6 log.go:172] (0xc000921970) (0xc00223cbe0) Stream added, broadcasting: 1 I0514 11:18:14.008682 6 log.go:172] (0xc000921970) Reply frame received for 1 I0514 11:18:14.008707 6 log.go:172] (0xc000921970) (0xc0020bcb40) Create stream I0514 11:18:14.008719 6 log.go:172] (0xc000921970) (0xc0020bcb40) Stream added, broadcasting: 3 I0514 11:18:14.009584 6 log.go:172] (0xc000921970) Reply frame received for 3 I0514 11:18:14.009622 6 log.go:172] (0xc000921970) (0xc00223cc80) Create stream I0514 11:18:14.009635 6 log.go:172] (0xc000921970) (0xc00223cc80) Stream added, broadcasting: 5 I0514 11:18:14.010528 6 log.go:172] (0xc000921970) Reply frame received for 5 I0514 11:18:14.061906 6 log.go:172] (0xc000921970) Data frame received for 3 I0514 11:18:14.061983 6 log.go:172] (0xc0020bcb40) (3) Data frame handling I0514 11:18:14.062007 6 log.go:172] (0xc0020bcb40) (3) Data frame sent I0514 11:18:14.062359 6 log.go:172] (0xc000921970) Data frame received for 3 I0514 11:18:14.062381 6 log.go:172] (0xc0020bcb40) (3) Data frame handling I0514 11:18:14.062832 6 log.go:172] (0xc000921970) Data frame received for 5 I0514 11:18:14.062844 6 log.go:172] (0xc00223cc80) (5) Data frame handling I0514 11:18:14.063526 6 log.go:172] (0xc000921970) Data frame received for 1 I0514 11:18:14.063549 6 log.go:172] (0xc00223cbe0) (1) Data frame handling I0514 11:18:14.063564 6 log.go:172] (0xc00223cbe0) (1) Data frame sent I0514 11:18:14.063579 6 log.go:172] (0xc000921970) (0xc00223cbe0) Stream removed, broadcasting: 1 I0514 11:18:14.063652 6 log.go:172] (0xc000921970) (0xc00223cbe0) Stream removed, broadcasting: 1 I0514 11:18:14.063662 6 log.go:172] (0xc000921970) (0xc0020bcb40) Stream removed, broadcasting: 3 I0514 11:18:14.063745 6 log.go:172] (0xc000921970) Go away received I0514 11:18:14.063761 6 log.go:172] (0xc000921970) (0xc00223cc80) Stream removed, broadcasting: 5 May 14 11:18:14.063: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:18:14.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-dxszs" for this suite. 
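The networking test above works by curling a webserver pod's /dial endpoint, which in turn sends a UDP hostName request to the target pod and reports the answer. A small Go sketch of the same probe shape; the pod IPs are taken from the log and are only reachable from inside that cluster, so this is illustrative rather than something to run against it directly:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// dialCheck reproduces the shape of the probe issued above: it asks the
// webserver pod (listening on 8080) to send a UDP "hostName" request to the
// target pod on port 8081 and returns the raw response body.
func dialCheck(proxyPodIP, targetPodIP string) (string, error) {
	url := fmt.Sprintf(
		"http://%s:8080/dial?request=hostName&protocol=udp&host=%s&port=8081&tries=1",
		proxyPodIP, targetPodIP)
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	out, err := dialCheck("10.244.1.248", "10.244.2.103") // IPs from the log; placeholders elsewhere
	fmt.Println(out, err)
}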
May 14 11:18:48.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:18:48.138: INFO: namespace: e2e-tests-pod-network-test-dxszs, resource: bindings, ignored listing per whitelist May 14 11:18:48.174: INFO: namespace e2e-tests-pod-network-test-dxszs deletion completed in 34.108199612s • [SLOW TEST:66.610 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:18:48.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 14 11:18:52.637: INFO: Pod name wrapped-volume-race-b129ca0d-95d4-11ea-9b22-0242ac110018: Found 0 pods out of 5 May 14 11:18:57.788: INFO: Pod name wrapped-volume-race-b129ca0d-95d4-11ea-9b22-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b129ca0d-95d4-11ea-9b22-0242ac110018 in namespace e2e-tests-emptydir-wrapper-fcv9s, will wait for the garbage collector to delete the pods May 14 11:20:49.864: INFO: Deleting ReplicationController wrapped-volume-race-b129ca0d-95d4-11ea-9b22-0242ac110018 took: 6.861534ms May 14 11:20:49.964: INFO: Terminating ReplicationController wrapped-volume-race-b129ca0d-95d4-11ea-9b22-0242ac110018 pods took: 100.21147ms STEP: Creating RC which spawns configmap-volume pods May 14 11:21:41.628: INFO: Pod name wrapped-volume-race-15ddc742-95d5-11ea-9b22-0242ac110018: Found 0 pods out of 5 May 14 11:21:46.635: INFO: Pod name wrapped-volume-race-15ddc742-95d5-11ea-9b22-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-15ddc742-95d5-11ea-9b22-0242ac110018 in namespace e2e-tests-emptydir-wrapper-fcv9s, will wait for the garbage collector to delete the pods May 14 11:23:31.565: INFO: Deleting ReplicationController wrapped-volume-race-15ddc742-95d5-11ea-9b22-0242ac110018 took: 8.016635ms May 14 11:23:32.266: INFO: Terminating ReplicationController wrapped-volume-race-15ddc742-95d5-11ea-9b22-0242ac110018 pods took: 700.259011ms STEP: Creating RC which spawns configmap-volume pods May 14 11:24:22.412: INFO: Pod name wrapped-volume-race-75b61fc0-95d5-11ea-9b22-0242ac110018: Found 0 pods out of 5 May 14 11:24:27.420: INFO: Pod name wrapped-volume-race-75b61fc0-95d5-11ea-9b22-0242ac110018: Found 5 pods out of 5 STEP: Ensuring each pod is running 
STEP: deleting ReplicationController wrapped-volume-race-75b61fc0-95d5-11ea-9b22-0242ac110018 in namespace e2e-tests-emptydir-wrapper-fcv9s, will wait for the garbage collector to delete the pods May 14 11:26:11.492: INFO: Deleting ReplicationController wrapped-volume-race-75b61fc0-95d5-11ea-9b22-0242ac110018 took: 5.861458ms May 14 11:26:11.692: INFO: Terminating ReplicationController wrapped-volume-race-75b61fc0-95d5-11ea-9b22-0242ac110018 pods took: 200.230299ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:27:23.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-fcv9s" for this suite. May 14 11:27:31.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:27:31.102: INFO: namespace: e2e-tests-emptydir-wrapper-fcv9s, resource: bindings, ignored listing per whitelist May 14 11:27:31.140: INFO: namespace e2e-tests-emptydir-wrapper-fcv9s deletion completed in 8.078808877s • [SLOW TEST:522.965 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:27:31.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-48n42 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-48n42 STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-48n42 STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-48n42 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-48n42 May 14 11:27:39.311: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-48n42, name: ss-0, uid: e82d8e76-95d5-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete. May 14 11:27:41.243: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-48n42, name: ss-0, uid: e82d8e76-95d5-11ea-99e8-0242ac110002, status phase: Failed. 
Waiting for statefulset controller to delete. May 14 11:27:41.291: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-48n42, name: ss-0, uid: e82d8e76-95d5-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete. May 14 11:27:41.320: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-48n42 STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-48n42 STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-48n42 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 14 11:27:47.578: INFO: Deleting all statefulset in ns e2e-tests-statefulset-48n42 May 14 11:27:47.581: INFO: Scaling statefulset ss to 0 May 14 11:28:08.311: INFO: Waiting for statefulset status.replicas updated to 0 May 14 11:28:08.351: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:28:08.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-48n42" for this suite. May 14 11:28:16.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:28:16.503: INFO: namespace: e2e-tests-statefulset-48n42, resource: bindings, ignored listing per whitelist May 14 11:28:16.546: INFO: namespace e2e-tests-statefulset-48n42 deletion completed in 8.073432207s • [SLOW TEST:45.406 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:28:16.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-015b4dba-95d6-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume configMaps May 14 11:28:16.677: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-015bf2f3-95d6-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-rxrp2" to be "success or failure" May 14 11:28:16.680: INFO: Pod "pod-projected-configmaps-015bf2f3-95d6-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.741716ms May 14 11:28:18.682: INFO: Pod "pod-projected-configmaps-015bf2f3-95d6-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005237796s May 14 11:28:23.258: INFO: Pod "pod-projected-configmaps-015bf2f3-95d6-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.58097438s May 14 11:28:25.261: INFO: Pod "pod-projected-configmaps-015bf2f3-95d6-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.58424288s STEP: Saw pod success May 14 11:28:25.261: INFO: Pod "pod-projected-configmaps-015bf2f3-95d6-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:28:25.264: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-015bf2f3-95d6-11ea-9b22-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 14 11:28:25.814: INFO: Waiting for pod pod-projected-configmaps-015bf2f3-95d6-11ea-9b22-0242ac110018 to disappear May 14 11:28:25.828: INFO: Pod pod-projected-configmaps-015bf2f3-95d6-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:28:25.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-rxrp2" for this suite. May 14 11:28:32.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:28:32.605: INFO: namespace: e2e-tests-projected-rxrp2, resource: bindings, ignored listing per whitelist May 14 11:28:32.607: INFO: namespace e2e-tests-projected-rxrp2 deletion completed in 6.512490576s • [SLOW TEST:16.061 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:28:32.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-0af11a38-95d6-11ea-9b22-0242ac110018 STEP: Creating secret with name s-test-opt-upd-0af11a91-95d6-11ea-9b22-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0af11a38-95d6-11ea-9b22-0242ac110018 STEP: Updating secret s-test-opt-upd-0af11a91-95d6-11ea-9b22-0242ac110018 STEP: Creating secret with name s-test-opt-create-0af11ab3-95d6-11ea-9b22-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:28:44.899: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-pf4c5" for this suite. May 14 11:29:08.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:29:08.939: INFO: namespace: e2e-tests-projected-pf4c5, resource: bindings, ignored listing per whitelist May 14 11:29:08.994: INFO: namespace e2e-tests-projected-pf4c5 deletion completed in 24.092246659s • [SLOW TEST:36.387 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:29:08.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all May 14 11:29:09.335: INFO: Waiting up to 5m0s for pod "client-containers-20bb1e1a-95d6-11ea-9b22-0242ac110018" in namespace "e2e-tests-containers-bntx9" to be "success or failure" May 14 11:29:09.437: INFO: Pod "client-containers-20bb1e1a-95d6-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 102.747185ms May 14 11:29:11.539: INFO: Pod "client-containers-20bb1e1a-95d6-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204810623s May 14 11:29:13.544: INFO: Pod "client-containers-20bb1e1a-95d6-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.209192483s STEP: Saw pod success May 14 11:29:13.544: INFO: Pod "client-containers-20bb1e1a-95d6-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:29:13.547: INFO: Trying to get logs from node hunter-worker pod client-containers-20bb1e1a-95d6-11ea-9b22-0242ac110018 container test-container: STEP: delete the pod May 14 11:29:13.606: INFO: Waiting for pod client-containers-20bb1e1a-95d6-11ea-9b22-0242ac110018 to disappear May 14 11:29:13.622: INFO: Pod client-containers-20bb1e1a-95d6-11ea-9b22-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:29:13.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-bntx9" for this suite. 
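The Docker Containers test above verifies the "override all" case: setting both command and args on a container replaces the image's default ENTRYPOINT and CMD. A minimal sketch of such a pod; the echo command, arguments, and busybox image are illustrative stand-ins for the suite's own test image:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overrideAllPod sketches the "override all" case from the log above:
// Command replaces the image's ENTRYPOINT and Args replaces its CMD.
func overrideAllPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // illustrative; any image with a default entrypoint works
				Command: []string{"echo"},                  // overrides ENTRYPOINT
				Args:    []string{"override", "arguments"}, // overrides CMD
			}},
		},
	}
}

func main() { _ = overrideAllPod("default") }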
May 14 11:29:21.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:29:21.694: INFO: namespace: e2e-tests-containers-bntx9, resource: bindings, ignored listing per whitelist May 14 11:29:21.713: INFO: namespace e2e-tests-containers-bntx9 deletion completed in 8.088553443s • [SLOW TEST:12.719 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:29:21.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 14 11:29:21.826: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7pp8d,SelfLink:/api/v1/namespaces/e2e-tests-watch-7pp8d/configmaps/e2e-watch-test-configmap-a,UID:2833e4e8-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522264,Generation:0,CreationTimestamp:2020-05-14 11:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 14 11:29:21.826: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7pp8d,SelfLink:/api/v1/namespaces/e2e-tests-watch-7pp8d/configmaps/e2e-watch-test-configmap-a,UID:2833e4e8-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522264,Generation:0,CreationTimestamp:2020-05-14 11:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 14 11:29:31.833: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7pp8d,SelfLink:/api/v1/namespaces/e2e-tests-watch-7pp8d/configmaps/e2e-watch-test-configmap-a,UID:2833e4e8-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522285,Generation:0,CreationTimestamp:2020-05-14 11:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 14 11:29:31.833: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7pp8d,SelfLink:/api/v1/namespaces/e2e-tests-watch-7pp8d/configmaps/e2e-watch-test-configmap-a,UID:2833e4e8-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522285,Generation:0,CreationTimestamp:2020-05-14 11:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 14 11:29:41.840: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7pp8d,SelfLink:/api/v1/namespaces/e2e-tests-watch-7pp8d/configmaps/e2e-watch-test-configmap-a,UID:2833e4e8-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522304,Generation:0,CreationTimestamp:2020-05-14 11:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 14 11:29:41.840: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7pp8d,SelfLink:/api/v1/namespaces/e2e-tests-watch-7pp8d/configmaps/e2e-watch-test-configmap-a,UID:2833e4e8-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522304,Generation:0,CreationTimestamp:2020-05-14 11:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 14 11:29:51.846: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7pp8d,SelfLink:/api/v1/namespaces/e2e-tests-watch-7pp8d/configmaps/e2e-watch-test-configmap-a,UID:2833e4e8-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522324,Generation:0,CreationTimestamp:2020-05-14 11:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},} May 14 11:29:51.847: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7pp8d,SelfLink:/api/v1/namespaces/e2e-tests-watch-7pp8d/configmaps/e2e-watch-test-configmap-a,UID:2833e4e8-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522324,Generation:0,CreationTimestamp:2020-05-14 11:29:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 14 11:30:01.854: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7pp8d,SelfLink:/api/v1/namespaces/e2e-tests-watch-7pp8d/configmaps/e2e-watch-test-configmap-b,UID:400f0cf3-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522344,Generation:0,CreationTimestamp:2020-05-14 11:30:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 14 11:30:01.854: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7pp8d,SelfLink:/api/v1/namespaces/e2e-tests-watch-7pp8d/configmaps/e2e-watch-test-configmap-b,UID:400f0cf3-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522344,Generation:0,CreationTimestamp:2020-05-14 11:30:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 14 11:30:12.075: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7pp8d,SelfLink:/api/v1/namespaces/e2e-tests-watch-7pp8d/configmaps/e2e-watch-test-configmap-b,UID:400f0cf3-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522363,Generation:0,CreationTimestamp:2020-05-14 11:30:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 14 11:30:12.075: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7pp8d,SelfLink:/api/v1/namespaces/e2e-tests-watch-7pp8d/configmaps/e2e-watch-test-configmap-b,UID:400f0cf3-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522363,Generation:0,CreationTimestamp:2020-05-14 11:30:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:30:22.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-7pp8d" for this suite. May 14 11:30:28.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:30:28.156: INFO: namespace: e2e-tests-watch-7pp8d, resource: bindings, ignored listing per whitelist May 14 11:30:28.683: INFO: namespace e2e-tests-watch-7pp8d deletion completed in 6.603685268s • [SLOW TEST:66.970 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:30:28.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium May 14 11:30:29.089: INFO: Waiting up to 5m0s for pod "pod-503b4526-95d6-11ea-9b22-0242ac110018" in namespace "e2e-tests-emptydir-nj5cz" to be "success or failure" May 14 11:30:29.148: INFO: Pod "pod-503b4526-95d6-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 58.914353ms May 14 11:30:31.152: INFO: Pod "pod-503b4526-95d6-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062388601s May 14 11:30:33.156: INFO: Pod "pod-503b4526-95d6-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066634684s May 14 11:30:35.511: INFO: Pod "pod-503b4526-95d6-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.421485813s STEP: Saw pod success May 14 11:30:35.511: INFO: Pod "pod-503b4526-95d6-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:30:35.514: INFO: Trying to get logs from node hunter-worker2 pod pod-503b4526-95d6-11ea-9b22-0242ac110018 container test-container: STEP: delete the pod May 14 11:30:36.242: INFO: Waiting for pod pod-503b4526-95d6-11ea-9b22-0242ac110018 to disappear May 14 11:30:36.744: INFO: Pod pod-503b4526-95d6-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:30:36.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-nj5cz" for this suite. 
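The EmptyDir test above runs a pod as a non-root user with an emptyDir volume on the node's default medium and checks the 0777 permissions of what gets created inside it. A hedged sketch of that pod shape; the UID, image, and command are illustrative (the real test drives its mounttest image with mode-specific arguments):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod sketches a non-root pod with an emptyDir volume on the node's
// default medium (disk rather than tmpfs); a plain ls stands in for the
// suite's permission checks.
func emptyDirPod(ns string) *corev1.Pod {
	nonRootUID := int64(1000) // illustrative non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}

func main() { _ = emptyDirPod("default") }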
May 14 11:30:43.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:30:43.330: INFO: namespace: e2e-tests-emptydir-nj5cz, resource: bindings, ignored listing per whitelist May 14 11:30:43.362: INFO: namespace e2e-tests-emptydir-nj5cz deletion completed in 6.612876525s • [SLOW TEST:14.678 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:30:43.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 11:30:43.545: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/ pods/ (200; 5.754877ms)
May 14 11:30:43.548: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.244543ms)
May 14 11:30:43.551: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 2.989958ms)
May 14 11:30:43.554: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.120064ms)
May 14 11:30:43.631: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 77.129235ms)
May 14 11:30:43.635: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.056739ms)
May 14 11:30:43.639: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.682694ms)
May 14 11:30:43.643: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.87821ms)
May 14 11:30:43.647: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 4.198679ms)
May 14 11:30:43.651: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.794578ms)
May 14 11:30:43.654: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.279014ms)
May 14 11:30:43.658: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.70599ms)
May 14 11:30:43.661: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.325443ms)
May 14 11:30:43.664: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.057817ms)
May 14 11:30:43.668: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.232167ms)
May 14 11:30:43.671: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.29642ms)
May 14 11:30:43.674: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.31368ms)
May 14 11:30:43.678: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.343087ms)
May 14 11:30:43.681: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/ (200; 3.131051ms)
May 14 11:30:43.684: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/: containers/ pods/
(200; 2.863335ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:30:43.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-gfvw2" for this suite. May 14 11:30:49.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:30:49.766: INFO: namespace: e2e-tests-proxy-gfvw2, resource: bindings, ignored listing per whitelist May 14 11:30:49.778: INFO: namespace e2e-tests-proxy-gfvw2 deletion completed in 6.09083554s • [SLOW TEST:6.416 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:30:49.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:31:49.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-ddzrr" for this suite. 
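The container-probe test above gives a pod a readiness probe that always fails, then checks that the pod never becomes Ready while its restart count stays at 0 (readiness failures, unlike liveness failures, never restart a container). A sketch of such a pod, assuming an exec probe that simply returns non-zero; the image, timings, and command are illustrative:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingReadinessPod sketches a pod whose readiness probe always fails:
// the container keeps running (restart count stays 0) but the pod is never
// reported Ready.
func failingReadinessPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "readiness",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "sleep 3600"},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in newer client libraries
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
}

func main() { _ = failingReadinessPod("default") }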
May 14 11:32:11.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:32:11.941: INFO: namespace: e2e-tests-container-probe-ddzrr, resource: bindings, ignored listing per whitelist May 14 11:32:11.982: INFO: namespace e2e-tests-container-probe-ddzrr deletion completed in 22.091892521s • [SLOW TEST:82.204 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:32:11.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 11:32:12.052: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 14 11:32:12.103: INFO: Pod name sample-pod: Found 0 pods out of 1 May 14 11:32:17.108: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 14 11:32:17.108: INFO: Creating deployment "test-rolling-update-deployment" May 14 11:32:17.112: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 14 11:32:17.125: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 14 11:32:19.157: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 14 11:32:19.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052737, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052737, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052737, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052737, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:32:21.163: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052737, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052737, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052737, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725052737, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 11:32:23.163: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 14 11:32:23.170: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-k4h4q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k4h4q/deployments/test-rolling-update-deployment,UID:90ae3675-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522723,Generation:1,CreationTimestamp:2020-05-14 11:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-14 11:32:17 +0000 UTC 2020-05-14 11:32:17 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-14 11:32:22 +0000 UTC 2020-05-14 11:32:17 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 14 11:32:23.173: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-k4h4q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k4h4q/replicasets/test-rolling-update-deployment-75db98fb4c,UID:90b16a1b-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522714,Generation:1,CreationTimestamp:2020-05-14 11:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 90ae3675-95d6-11ea-99e8-0242ac110002 0xc001e75627 0xc001e75628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 14 11:32:23.173: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 14 11:32:23.173: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-k4h4q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-k4h4q/replicasets/test-rolling-update-controller,UID:8daabf4e-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522722,Generation:2,CreationTimestamp:2020-05-14 11:32:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 90ae3675-95d6-11ea-99e8-0242ac110002 0xc001e75567 0xc001e75568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 14 11:32:23.176: INFO: Pod "test-rolling-update-deployment-75db98fb4c-fn9r5" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-fn9r5,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-k4h4q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-k4h4q/pods/test-rolling-update-deployment-75db98fb4c-fn9r5,UID:90b3229f-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522713,Generation:0,CreationTimestamp:2020-05-14 11:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 90b16a1b-95d6-11ea-99e8-0242ac110002 0xc001f4a2d7 0xc001f4a2d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xzjc2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xzjc2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-xzjc2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4a350} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4a370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 11:32:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 11:32:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 11:32:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 11:32:17 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.107,StartTime:2020-05-14 11:32:17 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-14 11:32:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://196fa33472301374bcbec481a21c0372aa4bdd2cdeb5a980307b3be7749f8e46}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:32:23.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-k4h4q" for this suite. 
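For orientation, the Deployment shape this spec exercises (one replica, RollingUpdate strategy with 25% maxUnavailable / 25% maxSurge, a single redis container) can be sketched with the k8s.io/api Go types roughly as below. This is an illustrative reconstruction from the object dump above, not the e2e framework's own helper; field values mirror the logged object.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32                { return &i }
func pct(s string) *intstr.IntOrString      { v := intstr.FromString(s); return &v }

// rollingUpdateDeployment mirrors the object dumped above: the pre-existing
// "test-rolling-update-controller" ReplicaSet is adopted and scaled to 0 while
// the new pod-template-hash ReplicaSet is rolled out.
var rollingUpdateDeployment = &appsv1.Deployment{
	ObjectMeta: metav1.ObjectMeta{
		Name:   "test-rolling-update-deployment",
		Labels: map[string]string{"name": "sample-pod"},
	},
	Spec: appsv1.DeploymentSpec{
		Replicas: int32Ptr(1),
		Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "sample-pod"}},
		Strategy: appsv1.DeploymentStrategy{
			Type: appsv1.RollingUpdateDeploymentStrategyType,
			RollingUpdate: &appsv1.RollingUpdateDeployment{
				MaxUnavailable: pct("25%"), // 25% of 1 replica rounds down to 0 unavailable pods
				MaxSurge:       pct("25%"), // 25% rounds up to 1 surge pod during the roll
			},
		},
		Template: corev1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "sample-pod"}},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "redis",
					Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
				}},
			},
		},
	},
}

func main() { _ = rollingUpdateDeployment }
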
May 14 11:32:31.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:32:31.231: INFO: namespace: e2e-tests-deployment-k4h4q, resource: bindings, ignored listing per whitelist May 14 11:32:31.246: INFO: namespace e2e-tests-deployment-k4h4q deletion completed in 8.067691905s • [SLOW TEST:19.264 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:32:31.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 14 11:32:31.325: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:32:39.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-tbxv2" for this suite. 
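The init-container spec above only logs "PodSpec: initContainers in spec.initContainers"; in outline it creates a RestartPolicy=Never pod whose init container exits non-zero, so the app container must never start and the pod ends up Failed. A minimal sketch with the k8s.io/api types follows; the image and commands are illustrative assumptions, not the test's exact values.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod sketches the scenario: the init container exits 1, so with
// RestartPolicy=Never the kubelet marks the pod Failed and never starts "run1".
var failingInitPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		InitContainers: []corev1.Container{{
			Name:    "init1",
			Image:   "busybox",
			Command: []string{"/bin/false"}, // fails immediately
		}},
		Containers: []corev1.Container{{
			Name:    "run1",
			Image:   "busybox",
			Command: []string{"/bin/true"},
		}},
	},
}

func main() { _ = failingInitPod }
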
May 14 11:32:45.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:32:45.638: INFO: namespace: e2e-tests-init-container-tbxv2, resource: bindings, ignored listing per whitelist May 14 11:32:45.739: INFO: namespace e2e-tests-init-container-tbxv2 deletion completed in 6.20015952s • [SLOW TEST:14.492 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:32:45.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 11:32:46.073: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 14 11:32:51.078: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 14 11:32:51.078: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 14 11:32:51.103: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-gg74n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gg74n/deployments/test-cleanup-deployment,UID:a4ee5428-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522856,Generation:1,CreationTimestamp:2020-05-14 11:32:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 14 11:32:51.152: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. May 14 11:32:51.152: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 14 11:32:51.153: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-gg74n,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gg74n/replicasets/test-cleanup-controller,UID:a1ea235d-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522857,Generation:1,CreationTimestamp:2020-05-14 11:32:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment a4ee5428-95d6-11ea-99e8-0242ac110002 0xc001dddb67 0xc001dddb68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 14 11:32:51.181: INFO: Pod "test-cleanup-controller-vrk6q" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-vrk6q,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-gg74n,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gg74n/pods/test-cleanup-controller-vrk6q,UID:a1f26a55-95d6-11ea-99e8-0242ac110002,ResourceVersion:10522850,Generation:0,CreationTimestamp:2020-05-14 11:32:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller a1ea235d-95d6-11ea-99e8-0242ac110002 0xc001bd80e7 0xc001bd80e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-szqqc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-szqqc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-szqqc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001bd8160} {node.kubernetes.io/unreachable Exists NoExecute 0xc001bd8180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 11:32:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 11:32:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 11:32:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 11:32:46 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.108,StartTime:2020-05-14 11:32:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 11:32:48 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ce900f75442e6728d30525494f1b123f60066d25ff841a71ce4376162689b359}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:32:51.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-gg74n" for this suite. May 14 11:32:57.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:32:57.372: INFO: namespace: e2e-tests-deployment-gg74n, resource: bindings, ignored listing per whitelist May 14 11:32:57.383: INFO: namespace e2e-tests-deployment-gg74n deletion completed in 6.109808043s • [SLOW TEST:11.644 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:32:57.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 14 11:33:05.697: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 11:33:05.727: INFO: Pod pod-with-poststart-exec-hook still exists May 14 11:33:07.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 11:33:07.731: INFO: Pod pod-with-poststart-exec-hook still exists May 14 11:33:09.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 11:33:09.732: INFO: Pod pod-with-poststart-exec-hook still exists May 14 11:33:11.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 11:33:11.730: INFO: Pod pod-with-poststart-exec-hook still exists May 14 11:33:13.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 11:33:13.731: INFO: Pod pod-with-poststart-exec-hook still exists May 14 11:33:15.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 11:33:15.732: INFO: Pod pod-with-poststart-exec-hook still exists May 14 11:33:17.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 11:33:17.731: INFO: Pod pod-with-poststart-exec-hook still exists May 14 11:33:19.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 11:33:19.732: INFO: Pod pod-with-poststart-exec-hook still exists May 14 11:33:21.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 11:33:21.731: INFO: Pod pod-with-poststart-exec-hook still exists May 14 11:33:23.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 11:33:23.817: INFO: Pod pod-with-poststart-exec-hook still exists May 14 11:33:25.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 11:33:25.732: INFO: Pod pod-with-poststart-exec-hook still exists May 14 11:33:27.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 11:33:27.731: INFO: Pod pod-with-poststart-exec-hook still exists May 14 11:33:29.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 11:33:29.731: INFO: Pod pod-with-poststart-exec-hook still exists May 14 11:33:31.727: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 14 11:33:31.739: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:33:31.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-sd6j8" for this suite. 
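The pod-with-poststart-exec-hook pod polled above carries a PostStart exec lifecycle hook. Roughly, with the v1.13-era k8s.io/api types (where the handler type is still corev1.Handler; newer releases rename it LifecycleHandler), the relevant container stanza looks like the sketch below. The hook command here is illustrative only; the real hook calls back to the HTTPGet-handling pod created in BeforeEach.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// postStartPod sketches a container whose PostStart exec hook must complete
// before the container is treated as started; the test then checks that the
// handler pod observed the hook.
var postStartPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "pod-with-poststart-exec-hook",
			Image: "busybox",
			Lifecycle: &corev1.Lifecycle{
				PostStart: &corev1.Handler{
					Exec: &corev1.ExecAction{
						// Illustrative command only.
						Command: []string{"sh", "-c", "echo poststart ran"},
					},
				},
			},
		}},
	},
}

func main() { _ = postStartPod }
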
May 14 11:33:53.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:33:53.824: INFO: namespace: e2e-tests-container-lifecycle-hook-sd6j8, resource: bindings, ignored listing per whitelist May 14 11:33:53.833: INFO: namespace e2e-tests-container-lifecycle-hook-sd6j8 deletion completed in 22.091178454s • [SLOW TEST:56.450 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:33:53.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 11:33:54.118: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca7e239b-95d6-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-zqj2s" to be "success or failure" May 14 11:33:54.122: INFO: Pod "downwardapi-volume-ca7e239b-95d6-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.440209ms May 14 11:33:56.126: INFO: Pod "downwardapi-volume-ca7e239b-95d6-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007905708s May 14 11:33:58.130: INFO: Pod "downwardapi-volume-ca7e239b-95d6-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012314017s STEP: Saw pod success May 14 11:33:58.131: INFO: Pod "downwardapi-volume-ca7e239b-95d6-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:33:58.134: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ca7e239b-95d6-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 11:33:58.167: INFO: Waiting for pod downwardapi-volume-ca7e239b-95d6-11ea-9b22-0242ac110018 to disappear May 14 11:33:58.170: INFO: Pod downwardapi-volume-ca7e239b-95d6-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:33:58.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zqj2s" for this suite. 
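The downwardapi-volume pod above surfaces the container's CPU limit as a file through a projected downwardAPI volume. A minimal sketch of that wiring with the k8s.io/api types is shown below; the file path, mount path, and limit value are illustrative assumptions, not the test's exact values.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuLimitPod sketches how limits.cpu becomes a file: the projected volume's
// downwardAPI source uses a ResourceFieldRef, and the container can then read
// /etc/podinfo/cpu_limit.
var cpuLimitPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "client-container",
			Image: "busybox",
			Resources: corev1.ResourceRequirements{
				Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
			},
			VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
		}},
		Volumes: []corev1.Volume{{
			Name: "podinfo",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						DownwardAPI: &corev1.DownwardAPIProjection{
							Items: []corev1.DownwardAPIVolumeFile{{
								Path: "cpu_limit",
								ResourceFieldRef: &corev1.ResourceFieldSelector{
									ContainerName: "client-container",
									Resource:      "limits.cpu",
								},
							}},
						},
					}},
				},
			},
		}},
	},
}

func main() { _ = cpuLimitPod }
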
May 14 11:34:04.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:34:04.290: INFO: namespace: e2e-tests-projected-zqj2s, resource: bindings, ignored listing per whitelist May 14 11:34:04.309: INFO: namespace e2e-tests-projected-zqj2s deletion completed in 6.111820094s • [SLOW TEST:10.476 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:34:04.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0514 11:34:45.305914 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 14 11:34:45.305: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:34:45.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-rt795" for this suite. 
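The "delete options say so" part of the garbage-collector spec refers to the orphan propagation policy on the delete call: the ReplicationController is removed, but its pods must be left running with their ownerReferences cleared. A small sketch of that option with the metav1 types follows; since the Delete method signature differs between client-go releases, only the options value itself is constructed here.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Orphan the dependents: the RC object is deleted, its pods are kept, and
	// the garbage collector removes the dangling ownerReferences instead of the pods.
	policy := metav1.DeletePropagationOrphan
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}

	// These options are then passed to the delete call, for example (shapes shown
	// for orientation only, as the signature varies by client-go version):
	//   client.CoreV1().ReplicationControllers(ns).Delete(name, &opts)        // older client-go
	//   client.CoreV1().ReplicationControllers(ns).Delete(ctx, name, opts)    // newer client-go
	fmt.Printf("%+v\n", opts)
}
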
May 14 11:34:56.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:34:56.695: INFO: namespace: e2e-tests-gc-rt795, resource: bindings, ignored listing per whitelist May 14 11:34:56.719: INFO: namespace e2e-tests-gc-rt795 deletion completed in 11.41069269s • [SLOW TEST:52.410 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:34:56.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 14 11:34:57.762: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:34:57.769: INFO: Number of nodes with available pods: 0 May 14 11:34:57.769: INFO: Node hunter-worker is running more than one daemon pod May 14 11:34:58.982: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:34:59.381: INFO: Number of nodes with available pods: 0 May 14 11:34:59.381: INFO: Node hunter-worker is running more than one daemon pod May 14 11:34:59.773: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:34:59.776: INFO: Number of nodes with available pods: 0 May 14 11:34:59.776: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:01.026: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:01.067: INFO: Number of nodes with available pods: 0 May 14 11:35:01.067: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:01.772: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:01.775: INFO: Number of nodes with available pods: 0 May 14 11:35:01.775: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:02.975: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:02.978: INFO: Number of nodes with available pods: 0 May 14 11:35:02.978: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:03.836: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:03.839: INFO: Number of nodes with available pods: 0 May 14 11:35:03.839: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:04.807: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:04.811: INFO: Number of nodes with available pods: 2 May 14 11:35:04.811: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 14 11:35:04.828: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:04.831: INFO: Number of nodes with available pods: 1 May 14 11:35:04.831: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:05.835: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:05.838: INFO: Number of nodes with available pods: 1 May 14 11:35:05.838: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:06.835: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:06.837: INFO: Number of nodes with available pods: 1 May 14 11:35:06.837: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:07.834: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:07.837: INFO: Number of nodes with available pods: 1 May 14 11:35:07.837: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:08.903: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:08.906: INFO: Number of nodes with available pods: 1 May 14 11:35:08.906: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:09.837: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:09.839: INFO: Number of nodes with available pods: 1 May 14 11:35:09.839: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:10.835: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:10.838: INFO: Number of nodes with available pods: 1 May 14 11:35:10.838: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:11.836: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 
14 11:35:11.840: INFO: Number of nodes with available pods: 1 May 14 11:35:11.840: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:13.125: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:13.128: INFO: Number of nodes with available pods: 1 May 14 11:35:13.128: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:13.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:13.870: INFO: Number of nodes with available pods: 1 May 14 11:35:13.870: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:14.834: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:14.836: INFO: Number of nodes with available pods: 1 May 14 11:35:14.836: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:15.850: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:15.853: INFO: Number of nodes with available pods: 2 May 14 11:35:15.853: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6k4qx, will wait for the garbage collector to delete the pods May 14 11:35:16.094: INFO: Deleting DaemonSet.extensions daemon-set took: 6.489323ms May 14 11:35:16.694: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.274885ms May 14 11:35:31.620: INFO: Number of nodes with available pods: 0 May 14 11:35:31.620: INFO: Number of running nodes: 0, number of available pods: 0 May 14 11:35:31.626: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6k4qx/daemonsets","resourceVersion":"10523507"},"items":null} May 14 11:35:31.628: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6k4qx/pods","resourceVersion":"10523507"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:35:31.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-6k4qx" for this suite. 
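The repeated "DaemonSet pods can't tolerate node hunter-control-plane with taints" lines above are expected: the DaemonSet under test carries no toleration for the master's NoSchedule taint, so only the two worker nodes are counted. A minimal DaemonSet sketch with the k8s.io/api types is below; names and image are illustrative, and adding the commented toleration would schedule the daemon pod onto the control-plane node as well.

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// simpleDaemonSet runs one pod per schedulable node; nodes whose taints are
// not tolerated (here the master's NoSchedule taint) are skipped.
var simpleDaemonSet = &appsv1.DaemonSet{
	ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
	Spec: appsv1.DaemonSetSpec{
		Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "daemon-set"}},
		Template: corev1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "daemon-set"}},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/nginx:1.14-alpine",
				}},
				// Uncomment to also run on the tainted control-plane node:
				// Tolerations: []corev1.Toleration{{
				// 	Key:      "node-role.kubernetes.io/master",
				// 	Operator: corev1.TolerationOpExists,
				// 	Effect:   corev1.TaintEffectNoSchedule,
				// }},
			},
		},
	},
}

func main() { _ = simpleDaemonSet }
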
May 14 11:35:39.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:35:39.754: INFO: namespace: e2e-tests-daemonsets-6k4qx, resource: bindings, ignored listing per whitelist May 14 11:35:39.779: INFO: namespace e2e-tests-daemonsets-6k4qx deletion completed in 8.139346845s • [SLOW TEST:43.059 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:35:39.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 14 11:35:39.949: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:39.952: INFO: Number of nodes with available pods: 0 May 14 11:35:39.952: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:40.955: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:40.958: INFO: Number of nodes with available pods: 0 May 14 11:35:40.958: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:41.993: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:41.998: INFO: Number of nodes with available pods: 0 May 14 11:35:41.998: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:42.958: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:42.962: INFO: Number of nodes with available pods: 0 May 14 11:35:42.962: INFO: Node hunter-worker is running more than one daemon pod May 14 11:35:44.005: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:44.009: INFO: Number of nodes with available pods: 1 May 14 11:35:44.009: INFO: Node hunter-worker2 is running more than one daemon pod May 14 11:35:44.957: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 14 11:35:44.960: INFO: Number of nodes with available pods: 2 May 14 11:35:44.960: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 14 11:35:45.161: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:45.496: INFO: Number of nodes with available pods: 1 May 14 11:35:45.496: INFO: Node hunter-worker2 is running more than one daemon pod May 14 11:35:46.618: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:46.670: INFO: Number of nodes with available pods: 1 May 14 11:35:46.670: INFO: Node hunter-worker2 is running more than one daemon pod May 14 11:35:47.694: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:47.748: INFO: Number of nodes with available pods: 1 May 14 11:35:47.748: INFO: Node hunter-worker2 is running more than one daemon pod May 14 11:35:48.532: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:48.535: INFO: Number of nodes with available pods: 1 May 14 11:35:48.535: INFO: Node hunter-worker2 is running more than one daemon pod May 14 11:35:49.501: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:49.505: INFO: Number of nodes with available pods: 1 May 14 11:35:49.505: INFO: Node hunter-worker2 is running more than one daemon pod May 14 11:35:50.501: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:50.504: INFO: Number of nodes with available pods: 1 May 14 11:35:50.504: INFO: Node hunter-worker2 is running more than one daemon pod May 14 11:35:51.502: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 11:35:51.506: INFO: Number of nodes with available pods: 2 May 14 11:35:51.506: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-dxtwh, will wait for the garbage collector to delete the pods May 14 11:35:51.574: INFO: Deleting DaemonSet.extensions daemon-set took: 8.66297ms May 14 11:35:51.674: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.258148ms May 14 11:36:01.781: INFO: Number of nodes with available pods: 0 May 14 11:36:01.781: INFO: Number of running nodes: 0, number of available pods: 0 May 14 11:36:01.783: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-dxtwh/daemonsets","resourceVersion":"10523643"},"items":null} May 14 11:36:01.785: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-dxtwh/pods","resourceVersion":"10523643"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:36:01.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-dxtwh" for this suite. May 14 11:36:07.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:36:07.869: INFO: namespace: e2e-tests-daemonsets-dxtwh, resource: bindings, ignored listing per whitelist May 14 11:36:07.885: INFO: namespace e2e-tests-daemonsets-dxtwh deletion completed in 6.090535765s • [SLOW TEST:28.106 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:36:07.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 14 11:36:20.431: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-22qx9 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:36:20.431: INFO: >>> kubeConfig: /root/.kube/config I0514 11:36:20.467820 6 log.go:172] (0xc000921600) (0xc002454000) Create stream I0514 11:36:20.467856 6 log.go:172] (0xc000921600) (0xc002454000) Stream added, broadcasting: 1 I0514 11:36:20.470406 6 log.go:172] (0xc000921600) 
Reply frame received for 1 I0514 11:36:20.470457 6 log.go:172] (0xc000921600) (0xc002362000) Create stream I0514 11:36:20.470472 6 log.go:172] (0xc000921600) (0xc002362000) Stream added, broadcasting: 3 I0514 11:36:20.471238 6 log.go:172] (0xc000921600) Reply frame received for 3 I0514 11:36:20.471278 6 log.go:172] (0xc000921600) (0xc0024540a0) Create stream I0514 11:36:20.471285 6 log.go:172] (0xc000921600) (0xc0024540a0) Stream added, broadcasting: 5 I0514 11:36:20.472062 6 log.go:172] (0xc000921600) Reply frame received for 5 I0514 11:36:20.550910 6 log.go:172] (0xc000921600) Data frame received for 5 I0514 11:36:20.550962 6 log.go:172] (0xc0024540a0) (5) Data frame handling I0514 11:36:20.550990 6 log.go:172] (0xc000921600) Data frame received for 3 I0514 11:36:20.551001 6 log.go:172] (0xc002362000) (3) Data frame handling I0514 11:36:20.551015 6 log.go:172] (0xc002362000) (3) Data frame sent I0514 11:36:20.551026 6 log.go:172] (0xc000921600) Data frame received for 3 I0514 11:36:20.551036 6 log.go:172] (0xc002362000) (3) Data frame handling I0514 11:36:20.552404 6 log.go:172] (0xc000921600) Data frame received for 1 I0514 11:36:20.552432 6 log.go:172] (0xc002454000) (1) Data frame handling I0514 11:36:20.552446 6 log.go:172] (0xc002454000) (1) Data frame sent I0514 11:36:20.552463 6 log.go:172] (0xc000921600) (0xc002454000) Stream removed, broadcasting: 1 I0514 11:36:20.552488 6 log.go:172] (0xc000921600) Go away received I0514 11:36:20.552616 6 log.go:172] (0xc000921600) (0xc002454000) Stream removed, broadcasting: 1 I0514 11:36:20.552655 6 log.go:172] (0xc000921600) (0xc002362000) Stream removed, broadcasting: 3 I0514 11:36:20.552666 6 log.go:172] (0xc000921600) (0xc0024540a0) Stream removed, broadcasting: 5 May 14 11:36:20.552: INFO: Exec stderr: "" May 14 11:36:20.552: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-22qx9 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:36:20.552: INFO: >>> kubeConfig: /root/.kube/config I0514 11:36:20.588080 6 log.go:172] (0xc0005e7810) (0xc0019001e0) Create stream I0514 11:36:20.588114 6 log.go:172] (0xc0005e7810) (0xc0019001e0) Stream added, broadcasting: 1 I0514 11:36:20.591234 6 log.go:172] (0xc0005e7810) Reply frame received for 1 I0514 11:36:20.591284 6 log.go:172] (0xc0005e7810) (0xc0000fd7c0) Create stream I0514 11:36:20.591311 6 log.go:172] (0xc0005e7810) (0xc0000fd7c0) Stream added, broadcasting: 3 I0514 11:36:20.592114 6 log.go:172] (0xc0005e7810) Reply frame received for 3 I0514 11:36:20.592151 6 log.go:172] (0xc0005e7810) (0xc001900280) Create stream I0514 11:36:20.592160 6 log.go:172] (0xc0005e7810) (0xc001900280) Stream added, broadcasting: 5 I0514 11:36:20.592897 6 log.go:172] (0xc0005e7810) Reply frame received for 5 I0514 11:36:20.653491 6 log.go:172] (0xc0005e7810) Data frame received for 3 I0514 11:36:20.653525 6 log.go:172] (0xc0000fd7c0) (3) Data frame handling I0514 11:36:20.653537 6 log.go:172] (0xc0000fd7c0) (3) Data frame sent I0514 11:36:20.653561 6 log.go:172] (0xc0005e7810) Data frame received for 3 I0514 11:36:20.653575 6 log.go:172] (0xc0000fd7c0) (3) Data frame handling I0514 11:36:20.653589 6 log.go:172] (0xc0005e7810) Data frame received for 5 I0514 11:36:20.653598 6 log.go:172] (0xc001900280) (5) Data frame handling I0514 11:36:20.654941 6 log.go:172] (0xc0005e7810) Data frame received for 1 I0514 11:36:20.654950 6 log.go:172] (0xc0019001e0) (1) Data frame handling I0514 11:36:20.654960 6 
log.go:172] (0xc0019001e0) (1) Data frame sent I0514 11:36:20.654970 6 log.go:172] (0xc0005e7810) (0xc0019001e0) Stream removed, broadcasting: 1 I0514 11:36:20.655068 6 log.go:172] (0xc0005e7810) (0xc0019001e0) Stream removed, broadcasting: 1 I0514 11:36:20.655084 6 log.go:172] (0xc0005e7810) (0xc0000fd7c0) Stream removed, broadcasting: 3 I0514 11:36:20.655091 6 log.go:172] (0xc0005e7810) (0xc001900280) Stream removed, broadcasting: 5 May 14 11:36:20.655: INFO: Exec stderr: "" May 14 11:36:20.655: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-22qx9 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0514 11:36:20.655151 6 log.go:172] (0xc0005e7810) Go away received May 14 11:36:20.655: INFO: >>> kubeConfig: /root/.kube/config I0514 11:36:20.684529 6 log.go:172] (0xc000ce04d0) (0xc0020bc1e0) Create stream I0514 11:36:20.684561 6 log.go:172] (0xc000ce04d0) (0xc0020bc1e0) Stream added, broadcasting: 1 I0514 11:36:20.687144 6 log.go:172] (0xc000ce04d0) Reply frame received for 1 I0514 11:36:20.687187 6 log.go:172] (0xc000ce04d0) (0xc001900320) Create stream I0514 11:36:20.687202 6 log.go:172] (0xc000ce04d0) (0xc001900320) Stream added, broadcasting: 3 I0514 11:36:20.688485 6 log.go:172] (0xc000ce04d0) Reply frame received for 3 I0514 11:36:20.688532 6 log.go:172] (0xc000ce04d0) (0xc0019003c0) Create stream I0514 11:36:20.688548 6 log.go:172] (0xc000ce04d0) (0xc0019003c0) Stream added, broadcasting: 5 I0514 11:36:20.689888 6 log.go:172] (0xc000ce04d0) Reply frame received for 5 I0514 11:36:20.761942 6 log.go:172] (0xc000ce04d0) Data frame received for 3 I0514 11:36:20.761969 6 log.go:172] (0xc001900320) (3) Data frame handling I0514 11:36:20.761980 6 log.go:172] (0xc001900320) (3) Data frame sent I0514 11:36:20.761986 6 log.go:172] (0xc000ce04d0) Data frame received for 3 I0514 11:36:20.761992 6 log.go:172] (0xc001900320) (3) Data frame handling I0514 11:36:20.762304 6 log.go:172] (0xc000ce04d0) Data frame received for 5 I0514 11:36:20.762318 6 log.go:172] (0xc0019003c0) (5) Data frame handling I0514 11:36:20.764243 6 log.go:172] (0xc000ce04d0) Data frame received for 1 I0514 11:36:20.764272 6 log.go:172] (0xc0020bc1e0) (1) Data frame handling I0514 11:36:20.764286 6 log.go:172] (0xc0020bc1e0) (1) Data frame sent I0514 11:36:20.764303 6 log.go:172] (0xc000ce04d0) (0xc0020bc1e0) Stream removed, broadcasting: 1 I0514 11:36:20.764326 6 log.go:172] (0xc000ce04d0) Go away received I0514 11:36:20.764449 6 log.go:172] (0xc000ce04d0) (0xc0020bc1e0) Stream removed, broadcasting: 1 I0514 11:36:20.764467 6 log.go:172] (0xc000ce04d0) (0xc001900320) Stream removed, broadcasting: 3 I0514 11:36:20.764475 6 log.go:172] (0xc000ce04d0) (0xc0019003c0) Stream removed, broadcasting: 5 May 14 11:36:20.764: INFO: Exec stderr: "" May 14 11:36:20.764: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-22qx9 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:36:20.764: INFO: >>> kubeConfig: /root/.kube/config I0514 11:36:20.790345 6 log.go:172] (0xc0022ea2c0) (0xc00223c1e0) Create stream I0514 11:36:20.790368 6 log.go:172] (0xc0022ea2c0) (0xc00223c1e0) Stream added, broadcasting: 1 I0514 11:36:20.792737 6 log.go:172] (0xc0022ea2c0) Reply frame received for 1 I0514 11:36:20.792813 6 log.go:172] (0xc0022ea2c0) (0xc0024541e0) Create stream I0514 11:36:20.792860 6 log.go:172] (0xc0022ea2c0) 
(0xc0024541e0) Stream added, broadcasting: 3 I0514 11:36:20.794235 6 log.go:172] (0xc0022ea2c0) Reply frame received for 3 I0514 11:36:20.794296 6 log.go:172] (0xc0022ea2c0) (0xc0020bc320) Create stream I0514 11:36:20.794311 6 log.go:172] (0xc0022ea2c0) (0xc0020bc320) Stream added, broadcasting: 5 I0514 11:36:20.795335 6 log.go:172] (0xc0022ea2c0) Reply frame received for 5 I0514 11:36:20.861957 6 log.go:172] (0xc0022ea2c0) Data frame received for 5 I0514 11:36:20.862018 6 log.go:172] (0xc0020bc320) (5) Data frame handling I0514 11:36:20.862063 6 log.go:172] (0xc0022ea2c0) Data frame received for 3 I0514 11:36:20.862131 6 log.go:172] (0xc0024541e0) (3) Data frame handling I0514 11:36:20.862166 6 log.go:172] (0xc0024541e0) (3) Data frame sent I0514 11:36:20.862184 6 log.go:172] (0xc0022ea2c0) Data frame received for 3 I0514 11:36:20.862211 6 log.go:172] (0xc0024541e0) (3) Data frame handling I0514 11:36:20.863961 6 log.go:172] (0xc0022ea2c0) Data frame received for 1 I0514 11:36:20.863990 6 log.go:172] (0xc00223c1e0) (1) Data frame handling I0514 11:36:20.864011 6 log.go:172] (0xc00223c1e0) (1) Data frame sent I0514 11:36:20.864036 6 log.go:172] (0xc0022ea2c0) (0xc00223c1e0) Stream removed, broadcasting: 1 I0514 11:36:20.864082 6 log.go:172] (0xc0022ea2c0) Go away received I0514 11:36:20.864157 6 log.go:172] (0xc0022ea2c0) (0xc00223c1e0) Stream removed, broadcasting: 1 I0514 11:36:20.864176 6 log.go:172] (0xc0022ea2c0) (0xc0024541e0) Stream removed, broadcasting: 3 I0514 11:36:20.864194 6 log.go:172] (0xc0022ea2c0) (0xc0020bc320) Stream removed, broadcasting: 5 May 14 11:36:20.864: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 14 11:36:20.864: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-22qx9 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:36:20.864: INFO: >>> kubeConfig: /root/.kube/config I0514 11:36:20.895805 6 log.go:172] (0xc0022ea790) (0xc00223c460) Create stream I0514 11:36:20.895846 6 log.go:172] (0xc0022ea790) (0xc00223c460) Stream added, broadcasting: 1 I0514 11:36:20.898375 6 log.go:172] (0xc0022ea790) Reply frame received for 1 I0514 11:36:20.898421 6 log.go:172] (0xc0022ea790) (0xc002454280) Create stream I0514 11:36:20.898435 6 log.go:172] (0xc0022ea790) (0xc002454280) Stream added, broadcasting: 3 I0514 11:36:20.899285 6 log.go:172] (0xc0022ea790) Reply frame received for 3 I0514 11:36:20.899317 6 log.go:172] (0xc0022ea790) (0xc0020bc3c0) Create stream I0514 11:36:20.899330 6 log.go:172] (0xc0022ea790) (0xc0020bc3c0) Stream added, broadcasting: 5 I0514 11:36:20.900304 6 log.go:172] (0xc0022ea790) Reply frame received for 5 I0514 11:36:20.969707 6 log.go:172] (0xc0022ea790) Data frame received for 3 I0514 11:36:20.969753 6 log.go:172] (0xc002454280) (3) Data frame handling I0514 11:36:20.969787 6 log.go:172] (0xc002454280) (3) Data frame sent I0514 11:36:20.969808 6 log.go:172] (0xc0022ea790) Data frame received for 3 I0514 11:36:20.969828 6 log.go:172] (0xc002454280) (3) Data frame handling I0514 11:36:20.969869 6 log.go:172] (0xc0022ea790) Data frame received for 5 I0514 11:36:20.969891 6 log.go:172] (0xc0020bc3c0) (5) Data frame handling I0514 11:36:20.971429 6 log.go:172] (0xc0022ea790) Data frame received for 1 I0514 11:36:20.971455 6 log.go:172] (0xc00223c460) (1) Data frame handling I0514 11:36:20.971472 6 log.go:172] (0xc00223c460) (1) Data frame sent I0514 
11:36:20.971588 6 log.go:172] (0xc0022ea790) (0xc00223c460) Stream removed, broadcasting: 1 I0514 11:36:20.971745 6 log.go:172] (0xc0022ea790) (0xc00223c460) Stream removed, broadcasting: 1 I0514 11:36:20.971763 6 log.go:172] (0xc0022ea790) (0xc002454280) Stream removed, broadcasting: 3 I0514 11:36:20.971877 6 log.go:172] (0xc0022ea790) Go away received I0514 11:36:20.972009 6 log.go:172] (0xc0022ea790) (0xc0020bc3c0) Stream removed, broadcasting: 5 May 14 11:36:20.972: INFO: Exec stderr: "" May 14 11:36:20.972: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-22qx9 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:36:20.972: INFO: >>> kubeConfig: /root/.kube/config I0514 11:36:21.003992 6 log.go:172] (0xc000c7c2c0) (0xc002362320) Create stream I0514 11:36:21.004033 6 log.go:172] (0xc000c7c2c0) (0xc002362320) Stream added, broadcasting: 1 I0514 11:36:21.007007 6 log.go:172] (0xc000c7c2c0) Reply frame received for 1 I0514 11:36:21.007053 6 log.go:172] (0xc000c7c2c0) (0xc001900460) Create stream I0514 11:36:21.007070 6 log.go:172] (0xc000c7c2c0) (0xc001900460) Stream added, broadcasting: 3 I0514 11:36:21.007859 6 log.go:172] (0xc000c7c2c0) Reply frame received for 3 I0514 11:36:21.007899 6 log.go:172] (0xc000c7c2c0) (0xc0020bc460) Create stream I0514 11:36:21.007920 6 log.go:172] (0xc000c7c2c0) (0xc0020bc460) Stream added, broadcasting: 5 I0514 11:36:21.008913 6 log.go:172] (0xc000c7c2c0) Reply frame received for 5 I0514 11:36:21.073836 6 log.go:172] (0xc000c7c2c0) Data frame received for 3 I0514 11:36:21.073865 6 log.go:172] (0xc001900460) (3) Data frame handling I0514 11:36:21.073874 6 log.go:172] (0xc001900460) (3) Data frame sent I0514 11:36:21.073883 6 log.go:172] (0xc000c7c2c0) Data frame received for 3 I0514 11:36:21.073889 6 log.go:172] (0xc001900460) (3) Data frame handling I0514 11:36:21.073911 6 log.go:172] (0xc000c7c2c0) Data frame received for 5 I0514 11:36:21.073942 6 log.go:172] (0xc0020bc460) (5) Data frame handling I0514 11:36:21.075203 6 log.go:172] (0xc000c7c2c0) Data frame received for 1 I0514 11:36:21.075225 6 log.go:172] (0xc002362320) (1) Data frame handling I0514 11:36:21.075232 6 log.go:172] (0xc002362320) (1) Data frame sent I0514 11:36:21.075254 6 log.go:172] (0xc000c7c2c0) (0xc002362320) Stream removed, broadcasting: 1 I0514 11:36:21.075271 6 log.go:172] (0xc000c7c2c0) Go away received I0514 11:36:21.075358 6 log.go:172] (0xc000c7c2c0) (0xc002362320) Stream removed, broadcasting: 1 I0514 11:36:21.075384 6 log.go:172] (0xc000c7c2c0) (0xc001900460) Stream removed, broadcasting: 3 I0514 11:36:21.075396 6 log.go:172] (0xc000c7c2c0) (0xc0020bc460) Stream removed, broadcasting: 5 May 14 11:36:21.075: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 14 11:36:21.075: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-22qx9 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:36:21.075: INFO: >>> kubeConfig: /root/.kube/config I0514 11:36:21.101726 6 log.go:172] (0xc000ce09a0) (0xc0020bc6e0) Create stream I0514 11:36:21.101752 6 log.go:172] (0xc000ce09a0) (0xc0020bc6e0) Stream added, broadcasting: 1 I0514 11:36:21.110951 6 log.go:172] (0xc000ce09a0) Reply frame received for 1 I0514 11:36:21.111009 6 log.go:172] (0xc000ce09a0) (0xc002454320) Create stream 
I0514 11:36:21.111023 6 log.go:172] (0xc000ce09a0) (0xc002454320) Stream added, broadcasting: 3 I0514 11:36:21.113668 6 log.go:172] (0xc000ce09a0) Reply frame received for 3 I0514 11:36:21.113745 6 log.go:172] (0xc000ce09a0) (0xc0020bc780) Create stream I0514 11:36:21.113781 6 log.go:172] (0xc000ce09a0) (0xc0020bc780) Stream added, broadcasting: 5 I0514 11:36:21.116282 6 log.go:172] (0xc000ce09a0) Reply frame received for 5 I0514 11:36:21.162408 6 log.go:172] (0xc000ce09a0) Data frame received for 5 I0514 11:36:21.162449 6 log.go:172] (0xc0020bc780) (5) Data frame handling I0514 11:36:21.162482 6 log.go:172] (0xc000ce09a0) Data frame received for 3 I0514 11:36:21.162496 6 log.go:172] (0xc002454320) (3) Data frame handling I0514 11:36:21.162512 6 log.go:172] (0xc002454320) (3) Data frame sent I0514 11:36:21.162526 6 log.go:172] (0xc000ce09a0) Data frame received for 3 I0514 11:36:21.162538 6 log.go:172] (0xc002454320) (3) Data frame handling I0514 11:36:21.163754 6 log.go:172] (0xc000ce09a0) Data frame received for 1 I0514 11:36:21.163780 6 log.go:172] (0xc0020bc6e0) (1) Data frame handling I0514 11:36:21.163792 6 log.go:172] (0xc0020bc6e0) (1) Data frame sent I0514 11:36:21.163803 6 log.go:172] (0xc000ce09a0) (0xc0020bc6e0) Stream removed, broadcasting: 1 I0514 11:36:21.163851 6 log.go:172] (0xc000ce09a0) Go away received I0514 11:36:21.163903 6 log.go:172] (0xc000ce09a0) (0xc0020bc6e0) Stream removed, broadcasting: 1 I0514 11:36:21.163930 6 log.go:172] (0xc000ce09a0) (0xc002454320) Stream removed, broadcasting: 3 I0514 11:36:21.163937 6 log.go:172] (0xc000ce09a0) (0xc0020bc780) Stream removed, broadcasting: 5 May 14 11:36:21.163: INFO: Exec stderr: "" May 14 11:36:21.163: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-22qx9 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:36:21.164: INFO: >>> kubeConfig: /root/.kube/config I0514 11:36:21.193828 6 log.go:172] (0xc000921ce0) (0xc002454640) Create stream I0514 11:36:21.193863 6 log.go:172] (0xc000921ce0) (0xc002454640) Stream added, broadcasting: 1 I0514 11:36:21.195534 6 log.go:172] (0xc000921ce0) Reply frame received for 1 I0514 11:36:21.195557 6 log.go:172] (0xc000921ce0) (0xc0019005a0) Create stream I0514 11:36:21.195565 6 log.go:172] (0xc000921ce0) (0xc0019005a0) Stream added, broadcasting: 3 I0514 11:36:21.196429 6 log.go:172] (0xc000921ce0) Reply frame received for 3 I0514 11:36:21.196462 6 log.go:172] (0xc000921ce0) (0xc001900640) Create stream I0514 11:36:21.196475 6 log.go:172] (0xc000921ce0) (0xc001900640) Stream added, broadcasting: 5 I0514 11:36:21.197737 6 log.go:172] (0xc000921ce0) Reply frame received for 5 I0514 11:36:21.247480 6 log.go:172] (0xc000921ce0) Data frame received for 5 I0514 11:36:21.247530 6 log.go:172] (0xc001900640) (5) Data frame handling I0514 11:36:21.247587 6 log.go:172] (0xc000921ce0) Data frame received for 3 I0514 11:36:21.247731 6 log.go:172] (0xc0019005a0) (3) Data frame handling I0514 11:36:21.247770 6 log.go:172] (0xc0019005a0) (3) Data frame sent I0514 11:36:21.247797 6 log.go:172] (0xc000921ce0) Data frame received for 3 I0514 11:36:21.247833 6 log.go:172] (0xc0019005a0) (3) Data frame handling I0514 11:36:21.249311 6 log.go:172] (0xc000921ce0) Data frame received for 1 I0514 11:36:21.249364 6 log.go:172] (0xc002454640) (1) Data frame handling I0514 11:36:21.249397 6 log.go:172] (0xc002454640) (1) Data frame sent I0514 11:36:21.249444 6 log.go:172] 
(0xc000921ce0) (0xc002454640) Stream removed, broadcasting: 1 I0514 11:36:21.249538 6 log.go:172] (0xc000921ce0) Go away received I0514 11:36:21.249580 6 log.go:172] (0xc000921ce0) (0xc002454640) Stream removed, broadcasting: 1 I0514 11:36:21.249596 6 log.go:172] (0xc000921ce0) (0xc0019005a0) Stream removed, broadcasting: 3 I0514 11:36:21.249608 6 log.go:172] (0xc000921ce0) (0xc001900640) Stream removed, broadcasting: 5 May 14 11:36:21.249: INFO: Exec stderr: "" May 14 11:36:21.249: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-22qx9 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:36:21.249: INFO: >>> kubeConfig: /root/.kube/config I0514 11:36:21.278098 6 log.go:172] (0xc000c7c790) (0xc002362640) Create stream I0514 11:36:21.278131 6 log.go:172] (0xc000c7c790) (0xc002362640) Stream added, broadcasting: 1 I0514 11:36:21.281356 6 log.go:172] (0xc000c7c790) Reply frame received for 1 I0514 11:36:21.281403 6 log.go:172] (0xc000c7c790) (0xc0024546e0) Create stream I0514 11:36:21.281419 6 log.go:172] (0xc000c7c790) (0xc0024546e0) Stream added, broadcasting: 3 I0514 11:36:21.282449 6 log.go:172] (0xc000c7c790) Reply frame received for 3 I0514 11:36:21.282498 6 log.go:172] (0xc000c7c790) (0xc00223c5a0) Create stream I0514 11:36:21.282517 6 log.go:172] (0xc000c7c790) (0xc00223c5a0) Stream added, broadcasting: 5 I0514 11:36:21.283391 6 log.go:172] (0xc000c7c790) Reply frame received for 5 I0514 11:36:21.356190 6 log.go:172] (0xc000c7c790) Data frame received for 5 I0514 11:36:21.356229 6 log.go:172] (0xc00223c5a0) (5) Data frame handling I0514 11:36:21.356260 6 log.go:172] (0xc000c7c790) Data frame received for 3 I0514 11:36:21.356276 6 log.go:172] (0xc0024546e0) (3) Data frame handling I0514 11:36:21.356290 6 log.go:172] (0xc0024546e0) (3) Data frame sent I0514 11:36:21.356301 6 log.go:172] (0xc000c7c790) Data frame received for 3 I0514 11:36:21.356314 6 log.go:172] (0xc0024546e0) (3) Data frame handling I0514 11:36:21.357762 6 log.go:172] (0xc000c7c790) Data frame received for 1 I0514 11:36:21.357795 6 log.go:172] (0xc002362640) (1) Data frame handling I0514 11:36:21.357816 6 log.go:172] (0xc002362640) (1) Data frame sent I0514 11:36:21.357838 6 log.go:172] (0xc000c7c790) (0xc002362640) Stream removed, broadcasting: 1 I0514 11:36:21.357856 6 log.go:172] (0xc000c7c790) Go away received I0514 11:36:21.357952 6 log.go:172] (0xc000c7c790) (0xc002362640) Stream removed, broadcasting: 1 I0514 11:36:21.357993 6 log.go:172] (0xc000c7c790) (0xc0024546e0) Stream removed, broadcasting: 3 I0514 11:36:21.358019 6 log.go:172] (0xc000c7c790) (0xc00223c5a0) Stream removed, broadcasting: 5 May 14 11:36:21.358: INFO: Exec stderr: "" May 14 11:36:21.358: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-22qx9 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:36:21.358: INFO: >>> kubeConfig: /root/.kube/config I0514 11:36:21.387834 6 log.go:172] (0xc000c7cc60) (0xc0023628c0) Create stream I0514 11:36:21.387863 6 log.go:172] (0xc000c7cc60) (0xc0023628c0) Stream added, broadcasting: 1 I0514 11:36:21.389671 6 log.go:172] (0xc000c7cc60) Reply frame received for 1 I0514 11:36:21.389711 6 log.go:172] (0xc000c7cc60) (0xc00223c6e0) Create stream I0514 11:36:21.389727 6 log.go:172] (0xc000c7cc60) (0xc00223c6e0) Stream added, broadcasting: 3 I0514 11:36:21.390569 6 
log.go:172] (0xc000c7cc60) Reply frame received for 3 I0514 11:36:21.390607 6 log.go:172] (0xc000c7cc60) (0xc001900780) Create stream I0514 11:36:21.390620 6 log.go:172] (0xc000c7cc60) (0xc001900780) Stream added, broadcasting: 5 I0514 11:36:21.391577 6 log.go:172] (0xc000c7cc60) Reply frame received for 5 I0514 11:36:21.458958 6 log.go:172] (0xc000c7cc60) Data frame received for 5 I0514 11:36:21.458996 6 log.go:172] (0xc001900780) (5) Data frame handling I0514 11:36:21.459029 6 log.go:172] (0xc000c7cc60) Data frame received for 3 I0514 11:36:21.459041 6 log.go:172] (0xc00223c6e0) (3) Data frame handling I0514 11:36:21.459053 6 log.go:172] (0xc00223c6e0) (3) Data frame sent I0514 11:36:21.459069 6 log.go:172] (0xc000c7cc60) Data frame received for 3 I0514 11:36:21.459078 6 log.go:172] (0xc00223c6e0) (3) Data frame handling I0514 11:36:21.460413 6 log.go:172] (0xc000c7cc60) Data frame received for 1 I0514 11:36:21.460451 6 log.go:172] (0xc0023628c0) (1) Data frame handling I0514 11:36:21.460472 6 log.go:172] (0xc0023628c0) (1) Data frame sent I0514 11:36:21.460490 6 log.go:172] (0xc000c7cc60) (0xc0023628c0) Stream removed, broadcasting: 1 I0514 11:36:21.460546 6 log.go:172] (0xc000c7cc60) Go away received I0514 11:36:21.460580 6 log.go:172] (0xc000c7cc60) (0xc0023628c0) Stream removed, broadcasting: 1 I0514 11:36:21.460668 6 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc00223c6e0), 0x5:(*spdystream.Stream)(0xc001900780)} I0514 11:36:21.460712 6 log.go:172] (0xc000c7cc60) (0xc00223c6e0) Stream removed, broadcasting: 3 I0514 11:36:21.460739 6 log.go:172] (0xc000c7cc60) (0xc001900780) Stream removed, broadcasting: 5 May 14 11:36:21.460: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:36:21.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-22qx9" for this suite. 
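The exec checks above read /etc/hosts and /etc/hosts-original in every container of test-pod and test-host-network-pod: the kubelet manages /etc/hosts for ordinary pod-network containers, but leaves it alone when the pod runs with hostNetwork: true or when a container mounts its own volume over /etc/hosts, which is what busybox-3 does. A minimal sketch of such a container follows; the image, volume name and command are assumptions for illustration, not values taken from the log.

apiVersion: v1
kind: Pod
metadata:
  name: test-pod                    # pod name from the log; all other values are assumed
spec:
  volumes:
  - name: own-etc-hosts
    emptyDir: {}                    # any volume mounted at /etc/hosts disables kubelet management of the file
  containers:
  - name: busybox-3                 # container name from the log
    image: busybox                  # assumed image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: own-etc-hosts
      mountPath: /etc/hosts         # container-managed copy; the test expects no kubelet-managed content here
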
May 14 11:37:13.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:37:13.528: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-22qx9, resource: bindings, ignored listing per whitelist May 14 11:37:13.579: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-22qx9 deletion completed in 52.114485642s • [SLOW TEST:65.693 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should test kubelet managed /etc/hosts file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:37:13.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 11:37:13.677: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4171e460-95d7-11ea-9b22-0242ac110018" in namespace "e2e-tests-downward-api-sh8m5" to be "success or failure" May 14 11:37:13.727: INFO: Pod "downwardapi-volume-4171e460-95d7-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 49.819024ms May 14 11:37:15.730: INFO: Pod "downwardapi-volume-4171e460-95d7-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052651278s May 14 11:37:17.733: INFO: Pod "downwardapi-volume-4171e460-95d7-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.05632498s May 14 11:37:19.736: INFO: Pod "downwardapi-volume-4171e460-95d7-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059133458s STEP: Saw pod success May 14 11:37:19.736: INFO: Pod "downwardapi-volume-4171e460-95d7-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:37:19.739: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-4171e460-95d7-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 11:37:19.776: INFO: Waiting for pod downwardapi-volume-4171e460-95d7-11ea-9b22-0242ac110018 to disappear May 14 11:37:19.789: INFO: Pod downwardapi-volume-4171e460-95d7-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:37:19.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-sh8m5" for this suite. 
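The downward API volume test above creates a pod whose client-container reads its own CPU limit from a file. A minimal sketch of that shape, assuming an arbitrary busybox image and limit value; only the field names below come from the Kubernetes API, nothing is copied from the generated pod:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container           # matches the container the log pulls logs from
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                  # assumed limit; the test only checks the published value matches it
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu       # exposes the container's CPU limit as a file
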
May 14 11:37:25.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:37:25.871: INFO: namespace: e2e-tests-downward-api-sh8m5, resource: bindings, ignored listing per whitelist May 14 11:37:25.885: INFO: namespace e2e-tests-downward-api-sh8m5 deletion completed in 6.091601758s • [SLOW TEST:12.306 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:37:25.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:37:30.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-9mlx8" for this suite. 
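The hostAliases test above logs only its setup and teardown, so for context: hostAliases entries in the pod spec are what the kubelet appends to the managed /etc/hosts. A minimal sketch, with the IP, hostnames, image and names all assumed rather than taken from the log:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases        # illustrative name
spec:
  hostAliases:                      # the kubelet writes these entries into the managed /etc/hosts
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox                  # assumed image
    command: ["sh", "-c", "cat /etc/hosts && sleep 3600"]
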
May 14 11:38:14.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:38:14.247: INFO: namespace: e2e-tests-kubelet-test-9mlx8, resource: bindings, ignored listing per whitelist May 14 11:38:14.247: INFO: namespace e2e-tests-kubelet-test-9mlx8 deletion completed in 44.19535825s • [SLOW TEST:48.361 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:38:14.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-659b02a2-95d7-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume configMaps May 14 11:38:14.362: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-659d71d5-95d7-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-qp7g4" to be "success or failure" May 14 11:38:14.376: INFO: Pod "pod-projected-configmaps-659d71d5-95d7-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.962089ms May 14 11:38:16.497: INFO: Pod "pod-projected-configmaps-659d71d5-95d7-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134901967s May 14 11:38:18.502: INFO: Pod "pod-projected-configmaps-659d71d5-95d7-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.139149595s May 14 11:38:20.504: INFO: Pod "pod-projected-configmaps-659d71d5-95d7-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.142013709s STEP: Saw pod success May 14 11:38:20.505: INFO: Pod "pod-projected-configmaps-659d71d5-95d7-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:38:20.507: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-659d71d5-95d7-11ea-9b22-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 14 11:38:20.544: INFO: Waiting for pod pod-projected-configmaps-659d71d5-95d7-11ea-9b22-0242ac110018 to disappear May 14 11:38:20.557: INFO: Pod pod-projected-configmaps-659d71d5-95d7-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:38:20.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qp7g4" for this suite. May 14 11:38:26.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:38:26.602: INFO: namespace: e2e-tests-projected-qp7g4, resource: bindings, ignored listing per whitelist May 14 11:38:26.666: INFO: namespace e2e-tests-projected-qp7g4 deletion completed in 6.105740152s • [SLOW TEST:12.418 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:38:26.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Starting the proxy May 14 11:38:26.786: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix236516069/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:38:26.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-tdk2f" for this suite. 
May 14 11:38:32.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:38:32.960: INFO: namespace: e2e-tests-kubectl-tdk2f, resource: bindings, ignored listing per whitelist May 14 11:38:32.987: INFO: namespace e2e-tests-kubectl-tdk2f deletion completed in 6.100136935s • [SLOW TEST:6.321 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:38:32.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-70ca4ccc-95d7-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume configMaps May 14 11:38:33.147: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-70cd0c84-95d7-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-ng4fg" to be "success or failure" May 14 11:38:33.150: INFO: Pod "pod-projected-configmaps-70cd0c84-95d7-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.593289ms May 14 11:38:35.180: INFO: Pod "pod-projected-configmaps-70cd0c84-95d7-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032900376s May 14 11:38:37.183: INFO: Pod "pod-projected-configmaps-70cd0c84-95d7-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036227704s STEP: Saw pod success May 14 11:38:37.183: INFO: Pod "pod-projected-configmaps-70cd0c84-95d7-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:38:37.186: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-70cd0c84-95d7-11ea-9b22-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 14 11:38:37.212: INFO: Waiting for pod pod-projected-configmaps-70cd0c84-95d7-11ea-9b22-0242ac110018 to disappear May 14 11:38:37.527: INFO: Pod pod-projected-configmaps-70cd0c84-95d7-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:38:37.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ng4fg" for this suite. 
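The projected ConfigMap test above mounts one ConfigMap through two separate volumes of the same pod. A minimal sketch of that layout, assuming illustrative names and a busybox image (the generated names in the log are not reused):

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-cm-example
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  volumes:
  - name: projected-volume-1
    projected:
      sources:
      - configMap:
          name: projected-cm-example
  - name: projected-volume-2         # the same ConfigMap, projected a second time
    projected:
      sources:
      - configMap:
          name: projected-cm-example
  containers:
  - name: projected-configmap-volume-test
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - name: projected-volume-1
      mountPath: /etc/projected-1
    - name: projected-volume-2
      mountPath: /etc/projected-2
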
May 14 11:38:43.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:38:43.802: INFO: namespace: e2e-tests-projected-ng4fg, resource: bindings, ignored listing per whitelist May 14 11:38:43.874: INFO: namespace e2e-tests-projected-ng4fg deletion completed in 6.342871822s • [SLOW TEST:10.887 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:38:43.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 11:38:43.988: INFO: Waiting up to 5m0s for pod "downwardapi-volume-774683eb-95d7-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-t5gjw" to be "success or failure" May 14 11:38:43.991: INFO: Pod "downwardapi-volume-774683eb-95d7-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203657ms May 14 11:38:46.084: INFO: Pod "downwardapi-volume-774683eb-95d7-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095517004s May 14 11:38:48.102: INFO: Pod "downwardapi-volume-774683eb-95d7-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.113424772s STEP: Saw pod success May 14 11:38:48.102: INFO: Pod "downwardapi-volume-774683eb-95d7-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:38:48.104: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-774683eb-95d7-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 11:38:48.135: INFO: Waiting for pod downwardapi-volume-774683eb-95d7-11ea-9b22-0242ac110018 to disappear May 14 11:38:48.139: INFO: Pod downwardapi-volume-774683eb-95d7-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:38:48.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-t5gjw" for this suite. 
May 14 11:38:54.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:38:54.257: INFO: namespace: e2e-tests-projected-t5gjw, resource: bindings, ignored listing per whitelist May 14 11:38:54.268: INFO: namespace e2e-tests-projected-t5gjw deletion completed in 6.125757552s • [SLOW TEST:10.394 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:38:54.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 14 11:38:54.374: INFO: namespace e2e-tests-kubectl-ddqsj May 14 11:38:54.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ddqsj' May 14 11:38:57.032: INFO: stderr: "" May 14 11:38:57.032: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 14 11:38:58.036: INFO: Selector matched 1 pods for map[app:redis] May 14 11:38:58.037: INFO: Found 0 / 1 May 14 11:38:59.036: INFO: Selector matched 1 pods for map[app:redis] May 14 11:38:59.036: INFO: Found 0 / 1 May 14 11:39:00.036: INFO: Selector matched 1 pods for map[app:redis] May 14 11:39:00.036: INFO: Found 0 / 1 May 14 11:39:01.036: INFO: Selector matched 1 pods for map[app:redis] May 14 11:39:01.036: INFO: Found 0 / 1 May 14 11:39:02.036: INFO: Selector matched 1 pods for map[app:redis] May 14 11:39:02.036: INFO: Found 1 / 1 May 14 11:39:02.036: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 14 11:39:02.039: INFO: Selector matched 1 pods for map[app:redis] May 14 11:39:02.039: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 14 11:39:02.039: INFO: wait on redis-master startup in e2e-tests-kubectl-ddqsj May 14 11:39:02.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4r54r redis-master --namespace=e2e-tests-kubectl-ddqsj' May 14 11:39:02.140: INFO: stderr: "" May 14 11:39:02.140: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 14 May 11:39:00.439 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 May 11:39:00.439 # Server started, Redis version 3.2.12\n1:M 14 May 11:39:00.439 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 May 11:39:00.439 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 14 11:39:02.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-ddqsj' May 14 11:39:02.287: INFO: stderr: "" May 14 11:39:02.287: INFO: stdout: "service/rm2 exposed\n" May 14 11:39:02.301: INFO: Service rm2 in namespace e2e-tests-kubectl-ddqsj found. STEP: exposing service May 14 11:39:04.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-ddqsj' May 14 11:39:04.469: INFO: stderr: "" May 14 11:39:04.470: INFO: stdout: "service/rm3 exposed\n" May 14 11:39:04.496: INFO: Service rm3 in namespace e2e-tests-kubectl-ddqsj found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:39:06.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ddqsj" for this suite. 
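The expose steps above derive Services from existing objects: rm2 from the redis-master replication controller and rm3 from rm2. Based only on the logged command kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 and the logged pod selector map[app:redis], the generated rm2 Service is roughly equivalent to the manifest below; the real selector may carry additional labels copied from the RC, so treat this as a sketch:

apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: e2e-tests-kubectl-ddqsj
spec:
  selector:
    app: redis            # inferred from the logged selector map[app:redis]
  ports:
  - protocol: TCP
    port: 1234            # from --port
    targetPort: 6379      # from --target-port, the redis container port
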
May 14 11:39:30.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:39:30.563: INFO: namespace: e2e-tests-kubectl-ddqsj, resource: bindings, ignored listing per whitelist May 14 11:39:30.616: INFO: namespace e2e-tests-kubectl-ddqsj deletion completed in 24.10862336s • [SLOW TEST:36.348 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:39:30.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 14 11:39:35.263: INFO: Successfully updated pod "annotationupdate93223f95-95d7-11ea-9b22-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:39:39.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8f7mc" for this suite. 
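The annotation-update test above patches the pod's annotations and waits for the projected downward API file to reflect the change ("Successfully updated pod ..."). A minimal sketch of a volume that publishes metadata.annotations this way; the pod name, image, annotation value and mount path are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example    # illustrative; the log uses a generated name
  annotations:
    build: "one"                    # assumed initial value; updating it later changes the projected file
spec:
  containers:
  - name: client-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
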
May 14 11:40:01.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:40:01.378: INFO: namespace: e2e-tests-projected-8f7mc, resource: bindings, ignored listing per whitelist May 14 11:40:01.392: INFO: namespace e2e-tests-projected-8f7mc deletion completed in 22.077205823s • [SLOW TEST:30.776 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:40:01.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:40:08.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-khtjm" for this suite. 
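The adoption test above first creates a bare pod carrying a 'name' label, then a replication controller whose selector matches that label, and checks that the controller adopts the existing pod instead of creating a replacement. A minimal sketch of the two objects, with the label taken from the logged step (pod-adoption) and the image assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption            # label the controller's selector matches
spec:
  containers:
  - name: pod-adoption
    image: busybox                # assumed image
    command: ["sleep", "3600"]
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption            # matches the orphan pod, so it is adopted rather than recreated
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: busybox            # assumed image
        command: ["sleep", "3600"]
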
May 14 11:40:30.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:40:30.592: INFO: namespace: e2e-tests-replication-controller-khtjm, resource: bindings, ignored listing per whitelist May 14 11:40:30.670: INFO: namespace e2e-tests-replication-controller-khtjm deletion completed in 22.130898307s • [SLOW TEST:29.278 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:40:30.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 14 11:40:30.763: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 11:40:30.811: INFO: Waiting for terminating namespaces to be deleted... May 14 11:40:30.813: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 14 11:40:30.818: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 14 11:40:30.818: INFO: Container kube-proxy ready: true, restart count 0 May 14 11:40:30.818: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 14 11:40:30.818: INFO: Container kindnet-cni ready: true, restart count 0 May 14 11:40:30.818: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 14 11:40:30.818: INFO: Container coredns ready: true, restart count 0 May 14 11:40:30.818: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 14 11:40:30.824: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 14 11:40:30.824: INFO: Container kindnet-cni ready: true, restart count 0 May 14 11:40:30.824: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 14 11:40:30.824: INFO: Container coredns ready: true, restart count 0 May 14 11:40:30.824: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 14 11:40:30.824: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
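At this point the test labels the node it found (the key and value appear in the verification step that follows) and relaunches the pod with a matching nodeSelector. A minimal sketch of such a pod, reusing the label from the log but with an assumed name, image and command:

apiVersion: v1
kind: Pod
metadata:
  name: with-labels-example       # illustrative name
spec:
  nodeSelector:
    kubernetes.io/e2e-b961a8e9-95d7-11ea-9b22-0242ac110018: "42"   # only nodes carrying this label are eligible
  containers:
  - name: with-labels
    image: busybox                # assumed image
    command: ["sleep", "3600"]
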
STEP: verifying the node has the label kubernetes.io/e2e-b961a8e9-95d7-11ea-9b22-0242ac110018 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-b961a8e9-95d7-11ea-9b22-0242ac110018 off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-b961a8e9-95d7-11ea-9b22-0242ac110018 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:40:38.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-zhk54" for this suite. May 14 11:40:57.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:40:57.079: INFO: namespace: e2e-tests-sched-pred-zhk54, resource: bindings, ignored listing per whitelist May 14 11:40:57.110: INFO: namespace e2e-tests-sched-pred-zhk54 deletion completed in 18.121007804s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:26.440 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:40:57.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-c6beb04c-95d7-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 11:40:57.334: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c6c0d49d-95d7-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-wx5n5" to be "success or failure" May 14 11:40:57.355: INFO: Pod "pod-projected-secrets-c6c0d49d-95d7-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.402273ms May 14 11:40:59.360: INFO: Pod "pod-projected-secrets-c6c0d49d-95d7-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025833314s May 14 11:41:01.369: INFO: Pod "pod-projected-secrets-c6c0d49d-95d7-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035325186s STEP: Saw pod success May 14 11:41:01.369: INFO: Pod "pod-projected-secrets-c6c0d49d-95d7-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:41:01.371: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-c6c0d49d-95d7-11ea-9b22-0242ac110018 container secret-volume-test: STEP: delete the pod May 14 11:41:01.635: INFO: Waiting for pod pod-projected-secrets-c6c0d49d-95d7-11ea-9b22-0242ac110018 to disappear May 14 11:41:01.723: INFO: Pod pod-projected-secrets-c6c0d49d-95d7-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:41:01.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wx5n5" for this suite. May 14 11:41:07.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:41:07.893: INFO: namespace: e2e-tests-projected-wx5n5, resource: bindings, ignored listing per whitelist May 14 11:41:07.905: INFO: namespace e2e-tests-projected-wx5n5 deletion completed in 6.177828331s • [SLOW TEST:10.794 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:41:07.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:41:14.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-nnk6k" for this suite. May 14 11:41:20.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:41:20.370: INFO: namespace: e2e-tests-namespaces-nnk6k, resource: bindings, ignored listing per whitelist May 14 11:41:20.395: INFO: namespace e2e-tests-namespaces-nnk6k deletion completed in 6.072434255s STEP: Destroying namespace "e2e-tests-nsdeletetest-jbpn8" for this suite. 
May 14 11:41:20.397: INFO: Namespace e2e-tests-nsdeletetest-jbpn8 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-fxb4c" for this suite. May 14 11:41:26.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:41:26.448: INFO: namespace: e2e-tests-nsdeletetest-fxb4c, resource: bindings, ignored listing per whitelist May 14 11:41:26.475: INFO: namespace e2e-tests-nsdeletetest-fxb4c deletion completed in 6.078044155s • [SLOW TEST:18.570 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:41:26.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-fvtt4 STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 11:41:26.707: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 14 11:41:48.815: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.34:8080/dial?request=hostName&protocol=http&host=10.244.2.126&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-fvtt4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:41:48.815: INFO: >>> kubeConfig: /root/.kube/config I0514 11:41:48.848062 6 log.go:172] (0xc000ce04d0) (0xc0021d7d60) Create stream I0514 11:41:48.848094 6 log.go:172] (0xc000ce04d0) (0xc0021d7d60) Stream added, broadcasting: 1 I0514 11:41:48.850003 6 log.go:172] (0xc000ce04d0) Reply frame received for 1 I0514 11:41:48.850036 6 log.go:172] (0xc000ce04d0) (0xc0020bdea0) Create stream I0514 11:41:48.850047 6 log.go:172] (0xc000ce04d0) (0xc0020bdea0) Stream added, broadcasting: 3 I0514 11:41:48.850920 6 log.go:172] (0xc000ce04d0) Reply frame received for 3 I0514 11:41:48.850962 6 log.go:172] (0xc000ce04d0) (0xc0023f0320) Create stream I0514 11:41:48.850974 6 log.go:172] (0xc000ce04d0) (0xc0023f0320) Stream added, broadcasting: 5 I0514 11:41:48.851829 6 log.go:172] (0xc000ce04d0) Reply frame received for 5 I0514 11:41:48.932660 6 log.go:172] (0xc000ce04d0) Data frame received for 3 I0514 11:41:48.932687 6 log.go:172] (0xc0020bdea0) (3) Data frame handling I0514 11:41:48.932708 6 log.go:172] (0xc0020bdea0) (3) Data frame sent I0514 11:41:48.933478 6 log.go:172] (0xc000ce04d0) Data frame received for 3 I0514 11:41:48.933509 6 log.go:172] (0xc0020bdea0) (3) Data frame handling I0514 11:41:48.933532 6 
log.go:172] (0xc000ce04d0) Data frame received for 5 I0514 11:41:48.933542 6 log.go:172] (0xc0023f0320) (5) Data frame handling I0514 11:41:48.935071 6 log.go:172] (0xc000ce04d0) Data frame received for 1 I0514 11:41:48.935092 6 log.go:172] (0xc0021d7d60) (1) Data frame handling I0514 11:41:48.935105 6 log.go:172] (0xc0021d7d60) (1) Data frame sent I0514 11:41:48.935117 6 log.go:172] (0xc000ce04d0) (0xc0021d7d60) Stream removed, broadcasting: 1 I0514 11:41:48.935133 6 log.go:172] (0xc000ce04d0) Go away received I0514 11:41:48.935262 6 log.go:172] (0xc000ce04d0) (0xc0021d7d60) Stream removed, broadcasting: 1 I0514 11:41:48.935284 6 log.go:172] (0xc000ce04d0) (0xc0020bdea0) Stream removed, broadcasting: 3 I0514 11:41:48.935294 6 log.go:172] (0xc000ce04d0) (0xc0023f0320) Stream removed, broadcasting: 5 May 14 11:41:48.935: INFO: Waiting for endpoints: map[] May 14 11:41:48.937: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.34:8080/dial?request=hostName&protocol=http&host=10.244.1.33&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-fvtt4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 11:41:48.937: INFO: >>> kubeConfig: /root/.kube/config I0514 11:41:48.960430 6 log.go:172] (0xc000921b80) (0xc0024541e0) Create stream I0514 11:41:48.960450 6 log.go:172] (0xc000921b80) (0xc0024541e0) Stream added, broadcasting: 1 I0514 11:41:48.963397 6 log.go:172] (0xc000921b80) Reply frame received for 1 I0514 11:41:48.963444 6 log.go:172] (0xc000921b80) (0xc001900be0) Create stream I0514 11:41:48.963460 6 log.go:172] (0xc000921b80) (0xc001900be0) Stream added, broadcasting: 3 I0514 11:41:48.964955 6 log.go:172] (0xc000921b80) Reply frame received for 3 I0514 11:41:48.964990 6 log.go:172] (0xc000921b80) (0xc002454280) Create stream I0514 11:41:48.965003 6 log.go:172] (0xc000921b80) (0xc002454280) Stream added, broadcasting: 5 I0514 11:41:48.966319 6 log.go:172] (0xc000921b80) Reply frame received for 5 I0514 11:41:49.032435 6 log.go:172] (0xc000921b80) Data frame received for 3 I0514 11:41:49.032465 6 log.go:172] (0xc001900be0) (3) Data frame handling I0514 11:41:49.032485 6 log.go:172] (0xc001900be0) (3) Data frame sent I0514 11:41:49.032593 6 log.go:172] (0xc000921b80) Data frame received for 5 I0514 11:41:49.032619 6 log.go:172] (0xc002454280) (5) Data frame handling I0514 11:41:49.032783 6 log.go:172] (0xc000921b80) Data frame received for 3 I0514 11:41:49.032794 6 log.go:172] (0xc001900be0) (3) Data frame handling I0514 11:41:49.034410 6 log.go:172] (0xc000921b80) Data frame received for 1 I0514 11:41:49.034600 6 log.go:172] (0xc0024541e0) (1) Data frame handling I0514 11:41:49.034623 6 log.go:172] (0xc0024541e0) (1) Data frame sent I0514 11:41:49.034637 6 log.go:172] (0xc000921b80) (0xc0024541e0) Stream removed, broadcasting: 1 I0514 11:41:49.034647 6 log.go:172] (0xc000921b80) Go away received I0514 11:41:49.034775 6 log.go:172] (0xc000921b80) (0xc0024541e0) Stream removed, broadcasting: 1 I0514 11:41:49.034800 6 log.go:172] (0xc000921b80) (0xc001900be0) Stream removed, broadcasting: 3 I0514 11:41:49.034815 6 log.go:172] (0xc000921b80) (0xc002454280) Stream removed, broadcasting: 5 May 14 11:41:49.034: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:41:49.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-pod-network-test-fvtt4" for this suite. May 14 11:42:11.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:42:11.083: INFO: namespace: e2e-tests-pod-network-test-fvtt4, resource: bindings, ignored listing per whitelist May 14 11:42:11.170: INFO: namespace e2e-tests-pod-network-test-fvtt4 deletion completed in 22.131571405s • [SLOW TEST:44.694 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:42:11.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-nkwqh [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-nkwqh STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-nkwqh May 14 11:42:11.335: INFO: Found 0 stateful pods, waiting for 1 May 14 11:42:21.337: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 14 11:42:21.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkwqh ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 11:42:21.619: INFO: stderr: "I0514 11:42:21.494840 1250 log.go:172] (0xc000752160) (0xc0004ecf00) Create stream\nI0514 11:42:21.494882 1250 log.go:172] (0xc000752160) (0xc0004ecf00) Stream added, broadcasting: 1\nI0514 11:42:21.496376 1250 log.go:172] (0xc000752160) Reply frame received for 1\nI0514 11:42:21.496414 1250 log.go:172] (0xc000752160) (0xc000650000) Create stream\nI0514 11:42:21.496427 1250 log.go:172] (0xc000752160) (0xc000650000) Stream added, broadcasting: 3\nI0514 11:42:21.497021 1250 log.go:172] (0xc000752160) Reply frame received for 3\nI0514 11:42:21.497038 1250 log.go:172] (0xc000752160) (0xc0004ecfa0) Create stream\nI0514 11:42:21.497044 1250 log.go:172] (0xc000752160) (0xc0004ecfa0) 
Stream added, broadcasting: 5\nI0514 11:42:21.497988 1250 log.go:172] (0xc000752160) Reply frame received for 5\nI0514 11:42:21.615295 1250 log.go:172] (0xc000752160) Data frame received for 3\nI0514 11:42:21.615323 1250 log.go:172] (0xc000650000) (3) Data frame handling\nI0514 11:42:21.615334 1250 log.go:172] (0xc000650000) (3) Data frame sent\nI0514 11:42:21.615342 1250 log.go:172] (0xc000752160) Data frame received for 3\nI0514 11:42:21.615348 1250 log.go:172] (0xc000650000) (3) Data frame handling\nI0514 11:42:21.615373 1250 log.go:172] (0xc000752160) Data frame received for 5\nI0514 11:42:21.615383 1250 log.go:172] (0xc0004ecfa0) (5) Data frame handling\nI0514 11:42:21.616271 1250 log.go:172] (0xc000752160) Data frame received for 1\nI0514 11:42:21.616291 1250 log.go:172] (0xc0004ecf00) (1) Data frame handling\nI0514 11:42:21.616304 1250 log.go:172] (0xc0004ecf00) (1) Data frame sent\nI0514 11:42:21.616311 1250 log.go:172] (0xc000752160) (0xc0004ecf00) Stream removed, broadcasting: 1\nI0514 11:42:21.616420 1250 log.go:172] (0xc000752160) (0xc0004ecf00) Stream removed, broadcasting: 1\nI0514 11:42:21.616446 1250 log.go:172] (0xc000752160) (0xc000650000) Stream removed, broadcasting: 3\nI0514 11:42:21.616458 1250 log.go:172] (0xc000752160) (0xc0004ecfa0) Stream removed, broadcasting: 5\n" May 14 11:42:21.619: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 11:42:21.619: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 11:42:21.622: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 14 11:42:31.626: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 14 11:42:31.626: INFO: Waiting for statefulset status.replicas updated to 0 May 14 11:42:31.658: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999421s May 14 11:42:32.661: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.97495576s May 14 11:42:33.689: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.971761942s May 14 11:42:34.693: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.943524417s May 14 11:42:35.846: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.940365937s May 14 11:42:36.850: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.78704031s May 14 11:42:37.853: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.782998074s May 14 11:42:38.858: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.779412102s May 14 11:42:39.862: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.774809852s May 14 11:42:40.876: INFO: Verifying statefulset ss doesn't scale past 1 for another 770.563105ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-nkwqh May 14 11:42:41.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkwqh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 11:42:42.200: INFO: stderr: "I0514 11:42:42.014923 1273 log.go:172] (0xc00061c0b0) (0xc0005b61e0) Create stream\nI0514 11:42:42.014984 1273 log.go:172] (0xc00061c0b0) (0xc0005b61e0) Stream added, broadcasting: 1\nI0514 11:42:42.017634 1273 log.go:172] (0xc00061c0b0) Reply frame received for 1\nI0514 11:42:42.017695 1273 log.go:172] (0xc00061c0b0) 
(0xc0004bcc80) Create stream\nI0514 11:42:42.017738 1273 log.go:172] (0xc00061c0b0) (0xc0004bcc80) Stream added, broadcasting: 3\nI0514 11:42:42.018744 1273 log.go:172] (0xc00061c0b0) Reply frame received for 3\nI0514 11:42:42.018777 1273 log.go:172] (0xc00061c0b0) (0xc0005b6280) Create stream\nI0514 11:42:42.018789 1273 log.go:172] (0xc00061c0b0) (0xc0005b6280) Stream added, broadcasting: 5\nI0514 11:42:42.019818 1273 log.go:172] (0xc00061c0b0) Reply frame received for 5\nI0514 11:42:42.191617 1273 log.go:172] (0xc00061c0b0) Data frame received for 3\nI0514 11:42:42.191661 1273 log.go:172] (0xc0004bcc80) (3) Data frame handling\nI0514 11:42:42.191689 1273 log.go:172] (0xc0004bcc80) (3) Data frame sent\nI0514 11:42:42.191705 1273 log.go:172] (0xc00061c0b0) Data frame received for 3\nI0514 11:42:42.191718 1273 log.go:172] (0xc0004bcc80) (3) Data frame handling\nI0514 11:42:42.192115 1273 log.go:172] (0xc00061c0b0) Data frame received for 5\nI0514 11:42:42.192136 1273 log.go:172] (0xc0005b6280) (5) Data frame handling\nI0514 11:42:42.194195 1273 log.go:172] (0xc00061c0b0) Data frame received for 1\nI0514 11:42:42.194226 1273 log.go:172] (0xc0005b61e0) (1) Data frame handling\nI0514 11:42:42.194250 1273 log.go:172] (0xc0005b61e0) (1) Data frame sent\nI0514 11:42:42.194274 1273 log.go:172] (0xc00061c0b0) (0xc0005b61e0) Stream removed, broadcasting: 1\nI0514 11:42:42.194300 1273 log.go:172] (0xc00061c0b0) Go away received\nI0514 11:42:42.194510 1273 log.go:172] (0xc00061c0b0) (0xc0005b61e0) Stream removed, broadcasting: 1\nI0514 11:42:42.194532 1273 log.go:172] (0xc00061c0b0) (0xc0004bcc80) Stream removed, broadcasting: 3\nI0514 11:42:42.194545 1273 log.go:172] (0xc00061c0b0) (0xc0005b6280) Stream removed, broadcasting: 5\n" May 14 11:42:42.200: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 11:42:42.200: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 11:42:42.204: INFO: Found 1 stateful pods, waiting for 3 May 14 11:42:52.209: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 14 11:42:52.209: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 14 11:42:52.209: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false May 14 11:43:02.210: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 14 11:43:02.210: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 14 11:43:02.210: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 14 11:43:02.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkwqh ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 11:43:02.381: INFO: stderr: "I0514 11:43:02.320576 1295 log.go:172] (0xc000162840) (0xc000734640) Create stream\nI0514 11:43:02.320620 1295 log.go:172] (0xc000162840) (0xc000734640) Stream added, broadcasting: 1\nI0514 11:43:02.322402 1295 log.go:172] (0xc000162840) Reply frame received for 1\nI0514 11:43:02.322427 1295 log.go:172] (0xc000162840) (0xc0005eefa0) Create stream\nI0514 11:43:02.322435 1295 log.go:172] (0xc000162840) (0xc0005eefa0) Stream added, broadcasting: 3\nI0514 
11:43:02.323136 1295 log.go:172] (0xc000162840) Reply frame received for 3\nI0514 11:43:02.323201 1295 log.go:172] (0xc000162840) (0xc00081c000) Create stream\nI0514 11:43:02.323222 1295 log.go:172] (0xc000162840) (0xc00081c000) Stream added, broadcasting: 5\nI0514 11:43:02.323872 1295 log.go:172] (0xc000162840) Reply frame received for 5\nI0514 11:43:02.375605 1295 log.go:172] (0xc000162840) Data frame received for 3\nI0514 11:43:02.375629 1295 log.go:172] (0xc0005eefa0) (3) Data frame handling\nI0514 11:43:02.375639 1295 log.go:172] (0xc0005eefa0) (3) Data frame sent\nI0514 11:43:02.375646 1295 log.go:172] (0xc000162840) Data frame received for 3\nI0514 11:43:02.375652 1295 log.go:172] (0xc0005eefa0) (3) Data frame handling\nI0514 11:43:02.375685 1295 log.go:172] (0xc000162840) Data frame received for 5\nI0514 11:43:02.375719 1295 log.go:172] (0xc00081c000) (5) Data frame handling\nI0514 11:43:02.377411 1295 log.go:172] (0xc000162840) Data frame received for 1\nI0514 11:43:02.377429 1295 log.go:172] (0xc000734640) (1) Data frame handling\nI0514 11:43:02.377458 1295 log.go:172] (0xc000734640) (1) Data frame sent\nI0514 11:43:02.377488 1295 log.go:172] (0xc000162840) (0xc000734640) Stream removed, broadcasting: 1\nI0514 11:43:02.377653 1295 log.go:172] (0xc000162840) (0xc000734640) Stream removed, broadcasting: 1\nI0514 11:43:02.377681 1295 log.go:172] (0xc000162840) (0xc0005eefa0) Stream removed, broadcasting: 3\nI0514 11:43:02.377707 1295 log.go:172] (0xc000162840) Go away received\nI0514 11:43:02.377808 1295 log.go:172] (0xc000162840) (0xc00081c000) Stream removed, broadcasting: 5\n" May 14 11:43:02.381: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 11:43:02.381: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 11:43:02.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkwqh ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 11:43:02.773: INFO: stderr: "I0514 11:43:02.494750 1317 log.go:172] (0xc0007244d0) (0xc00073a640) Create stream\nI0514 11:43:02.494799 1317 log.go:172] (0xc0007244d0) (0xc00073a640) Stream added, broadcasting: 1\nI0514 11:43:02.496407 1317 log.go:172] (0xc0007244d0) Reply frame received for 1\nI0514 11:43:02.496437 1317 log.go:172] (0xc0007244d0) (0xc00023cc80) Create stream\nI0514 11:43:02.496448 1317 log.go:172] (0xc0007244d0) (0xc00023cc80) Stream added, broadcasting: 3\nI0514 11:43:02.497019 1317 log.go:172] (0xc0007244d0) Reply frame received for 3\nI0514 11:43:02.497041 1317 log.go:172] (0xc0007244d0) (0xc00073a6e0) Create stream\nI0514 11:43:02.497048 1317 log.go:172] (0xc0007244d0) (0xc00073a6e0) Stream added, broadcasting: 5\nI0514 11:43:02.497678 1317 log.go:172] (0xc0007244d0) Reply frame received for 5\nI0514 11:43:02.766679 1317 log.go:172] (0xc0007244d0) Data frame received for 3\nI0514 11:43:02.766785 1317 log.go:172] (0xc00023cc80) (3) Data frame handling\nI0514 11:43:02.766810 1317 log.go:172] (0xc00023cc80) (3) Data frame sent\nI0514 11:43:02.766833 1317 log.go:172] (0xc0007244d0) Data frame received for 3\nI0514 11:43:02.766854 1317 log.go:172] (0xc0007244d0) Data frame received for 5\nI0514 11:43:02.766882 1317 log.go:172] (0xc00073a6e0) (5) Data frame handling\nI0514 11:43:02.766914 1317 log.go:172] (0xc00023cc80) (3) Data frame handling\nI0514 11:43:02.768186 1317 log.go:172] (0xc0007244d0) Data frame received for 1\nI0514 
11:43:02.768211 1317 log.go:172] (0xc00073a640) (1) Data frame handling\nI0514 11:43:02.768238 1317 log.go:172] (0xc00073a640) (1) Data frame sent\nI0514 11:43:02.768295 1317 log.go:172] (0xc0007244d0) (0xc00073a640) Stream removed, broadcasting: 1\nI0514 11:43:02.768323 1317 log.go:172] (0xc0007244d0) Go away received\nI0514 11:43:02.768573 1317 log.go:172] (0xc0007244d0) (0xc00073a640) Stream removed, broadcasting: 1\nI0514 11:43:02.768608 1317 log.go:172] (0xc0007244d0) (0xc00023cc80) Stream removed, broadcasting: 3\nI0514 11:43:02.768630 1317 log.go:172] (0xc0007244d0) (0xc00073a6e0) Stream removed, broadcasting: 5\n" May 14 11:43:02.774: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 11:43:02.774: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 11:43:02.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkwqh ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 11:43:03.361: INFO: stderr: "I0514 11:43:03.056201 1339 log.go:172] (0xc0007d2210) (0xc0006f2640) Create stream\nI0514 11:43:03.056251 1339 log.go:172] (0xc0007d2210) (0xc0006f2640) Stream added, broadcasting: 1\nI0514 11:43:03.057911 1339 log.go:172] (0xc0007d2210) Reply frame received for 1\nI0514 11:43:03.057947 1339 log.go:172] (0xc0007d2210) (0xc00060edc0) Create stream\nI0514 11:43:03.057957 1339 log.go:172] (0xc0007d2210) (0xc00060edc0) Stream added, broadcasting: 3\nI0514 11:43:03.058752 1339 log.go:172] (0xc0007d2210) Reply frame received for 3\nI0514 11:43:03.058781 1339 log.go:172] (0xc0007d2210) (0xc0006f26e0) Create stream\nI0514 11:43:03.058794 1339 log.go:172] (0xc0007d2210) (0xc0006f26e0) Stream added, broadcasting: 5\nI0514 11:43:03.059285 1339 log.go:172] (0xc0007d2210) Reply frame received for 5\nI0514 11:43:03.354586 1339 log.go:172] (0xc0007d2210) Data frame received for 5\nI0514 11:43:03.354651 1339 log.go:172] (0xc0006f26e0) (5) Data frame handling\nI0514 11:43:03.354688 1339 log.go:172] (0xc0007d2210) Data frame received for 3\nI0514 11:43:03.354710 1339 log.go:172] (0xc00060edc0) (3) Data frame handling\nI0514 11:43:03.354732 1339 log.go:172] (0xc00060edc0) (3) Data frame sent\nI0514 11:43:03.354787 1339 log.go:172] (0xc0007d2210) Data frame received for 3\nI0514 11:43:03.354807 1339 log.go:172] (0xc00060edc0) (3) Data frame handling\nI0514 11:43:03.356969 1339 log.go:172] (0xc0007d2210) Data frame received for 1\nI0514 11:43:03.356996 1339 log.go:172] (0xc0006f2640) (1) Data frame handling\nI0514 11:43:03.357019 1339 log.go:172] (0xc0006f2640) (1) Data frame sent\nI0514 11:43:03.357036 1339 log.go:172] (0xc0007d2210) (0xc0006f2640) Stream removed, broadcasting: 1\nI0514 11:43:03.357334 1339 log.go:172] (0xc0007d2210) (0xc0006f2640) Stream removed, broadcasting: 1\nI0514 11:43:03.357358 1339 log.go:172] (0xc0007d2210) (0xc00060edc0) Stream removed, broadcasting: 3\nI0514 11:43:03.357370 1339 log.go:172] (0xc0007d2210) (0xc0006f26e0) Stream removed, broadcasting: 5\n" May 14 11:43:03.361: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 11:43:03.361: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 11:43:03.361: INFO: Waiting for statefulset status.replicas updated to 0 May 14 11:43:03.372: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 
14 11:43:13.380: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 14 11:43:13.380: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 14 11:43:13.380: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 14 11:43:13.536: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999961s May 14 11:43:14.540: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.849667176s May 14 11:43:15.544: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.845228305s May 14 11:43:16.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.841393451s May 14 11:43:17.564: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.836613589s May 14 11:43:18.569: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.821165209s May 14 11:43:19.574: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.816095126s May 14 11:43:20.579: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.811093105s May 14 11:43:21.602: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.806484511s May 14 11:43:22.608: INFO: Verifying statefulset ss doesn't scale past 3 for another 782.84878ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-nkwqh May 14 11:43:23.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkwqh ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 11:43:23.820: INFO: stderr: "I0514 11:43:23.732087 1362 log.go:172] (0xc000148790) (0xc00072c640) Create stream\nI0514 11:43:23.732131 1362 log.go:172] (0xc000148790) (0xc00072c640) Stream added, broadcasting: 1\nI0514 11:43:23.734075 1362 log.go:172] (0xc000148790) Reply frame received for 1\nI0514 11:43:23.734119 1362 log.go:172] (0xc000148790) (0xc0005d2d20) Create stream\nI0514 11:43:23.734152 1362 log.go:172] (0xc000148790) (0xc0005d2d20) Stream added, broadcasting: 3\nI0514 11:43:23.734940 1362 log.go:172] (0xc000148790) Reply frame received for 3\nI0514 11:43:23.734980 1362 log.go:172] (0xc000148790) (0xc000282000) Create stream\nI0514 11:43:23.734997 1362 log.go:172] (0xc000148790) (0xc000282000) Stream added, broadcasting: 5\nI0514 11:43:23.735623 1362 log.go:172] (0xc000148790) Reply frame received for 5\nI0514 11:43:23.814023 1362 log.go:172] (0xc000148790) Data frame received for 3\nI0514 11:43:23.814056 1362 log.go:172] (0xc0005d2d20) (3) Data frame handling\nI0514 11:43:23.814067 1362 log.go:172] (0xc0005d2d20) (3) Data frame sent\nI0514 11:43:23.814107 1362 log.go:172] (0xc000148790) Data frame received for 5\nI0514 11:43:23.814150 1362 log.go:172] (0xc000282000) (5) Data frame handling\nI0514 11:43:23.814197 1362 log.go:172] (0xc000148790) Data frame received for 3\nI0514 11:43:23.814223 1362 log.go:172] (0xc0005d2d20) (3) Data frame handling\nI0514 11:43:23.815763 1362 log.go:172] (0xc000148790) Data frame received for 1\nI0514 11:43:23.815786 1362 log.go:172] (0xc00072c640) (1) Data frame handling\nI0514 11:43:23.815798 1362 log.go:172] (0xc00072c640) (1) Data frame sent\nI0514 11:43:23.815810 1362 log.go:172] (0xc000148790) (0xc00072c640) Stream removed, broadcasting: 1\nI0514 11:43:23.815954 1362 log.go:172] (0xc000148790) Go away received\nI0514 11:43:23.816106 1362 log.go:172] (0xc000148790) (0xc00072c640) Stream removed, broadcasting: 1\nI0514 
11:43:23.816128 1362 log.go:172] (0xc000148790) (0xc0005d2d20) Stream removed, broadcasting: 3\nI0514 11:43:23.816142 1362 log.go:172] (0xc000148790) (0xc000282000) Stream removed, broadcasting: 5\n" May 14 11:43:23.820: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 11:43:23.820: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 11:43:23.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkwqh ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 11:43:24.018: INFO: stderr: "I0514 11:43:23.949801 1385 log.go:172] (0xc000700370) (0xc000695540) Create stream\nI0514 11:43:23.949898 1385 log.go:172] (0xc000700370) (0xc000695540) Stream added, broadcasting: 1\nI0514 11:43:23.952197 1385 log.go:172] (0xc000700370) Reply frame received for 1\nI0514 11:43:23.952274 1385 log.go:172] (0xc000700370) (0xc0007cc000) Create stream\nI0514 11:43:23.952299 1385 log.go:172] (0xc000700370) (0xc0007cc000) Stream added, broadcasting: 3\nI0514 11:43:23.953464 1385 log.go:172] (0xc000700370) Reply frame received for 3\nI0514 11:43:23.953489 1385 log.go:172] (0xc000700370) (0xc000732000) Create stream\nI0514 11:43:23.953498 1385 log.go:172] (0xc000700370) (0xc000732000) Stream added, broadcasting: 5\nI0514 11:43:23.954411 1385 log.go:172] (0xc000700370) Reply frame received for 5\nI0514 11:43:24.012922 1385 log.go:172] (0xc000700370) Data frame received for 5\nI0514 11:43:24.012970 1385 log.go:172] (0xc000732000) (5) Data frame handling\nI0514 11:43:24.012999 1385 log.go:172] (0xc000700370) Data frame received for 3\nI0514 11:43:24.013012 1385 log.go:172] (0xc0007cc000) (3) Data frame handling\nI0514 11:43:24.013021 1385 log.go:172] (0xc0007cc000) (3) Data frame sent\nI0514 11:43:24.013028 1385 log.go:172] (0xc000700370) Data frame received for 3\nI0514 11:43:24.013036 1385 log.go:172] (0xc0007cc000) (3) Data frame handling\nI0514 11:43:24.014600 1385 log.go:172] (0xc000700370) Data frame received for 1\nI0514 11:43:24.014627 1385 log.go:172] (0xc000695540) (1) Data frame handling\nI0514 11:43:24.014647 1385 log.go:172] (0xc000695540) (1) Data frame sent\nI0514 11:43:24.014661 1385 log.go:172] (0xc000700370) (0xc000695540) Stream removed, broadcasting: 1\nI0514 11:43:24.014754 1385 log.go:172] (0xc000700370) Go away received\nI0514 11:43:24.014870 1385 log.go:172] (0xc000700370) (0xc000695540) Stream removed, broadcasting: 1\nI0514 11:43:24.014893 1385 log.go:172] (0xc000700370) (0xc0007cc000) Stream removed, broadcasting: 3\nI0514 11:43:24.014904 1385 log.go:172] (0xc000700370) (0xc000732000) Stream removed, broadcasting: 5\n" May 14 11:43:24.018: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 11:43:24.018: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 11:43:24.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nkwqh ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 11:43:24.326: INFO: stderr: "I0514 11:43:24.247556 1407 log.go:172] (0xc000794160) (0xc0006da780) Create stream\nI0514 11:43:24.247654 1407 log.go:172] (0xc000794160) (0xc0006da780) Stream added, broadcasting: 1\nI0514 11:43:24.250840 1407 log.go:172] (0xc000794160) Reply frame received for 1\nI0514 11:43:24.250930 1407 
log.go:172] (0xc000794160) (0xc00051cb40) Create stream\nI0514 11:43:24.250953 1407 log.go:172] (0xc000794160) (0xc00051cb40) Stream added, broadcasting: 3\nI0514 11:43:24.251755 1407 log.go:172] (0xc000794160) Reply frame received for 3\nI0514 11:43:24.251811 1407 log.go:172] (0xc000794160) (0xc000208000) Create stream\nI0514 11:43:24.251833 1407 log.go:172] (0xc000794160) (0xc000208000) Stream added, broadcasting: 5\nI0514 11:43:24.252716 1407 log.go:172] (0xc000794160) Reply frame received for 5\nI0514 11:43:24.321985 1407 log.go:172] (0xc000794160) Data frame received for 3\nI0514 11:43:24.322007 1407 log.go:172] (0xc00051cb40) (3) Data frame handling\nI0514 11:43:24.322022 1407 log.go:172] (0xc00051cb40) (3) Data frame sent\nI0514 11:43:24.322031 1407 log.go:172] (0xc000794160) Data frame received for 3\nI0514 11:43:24.322043 1407 log.go:172] (0xc00051cb40) (3) Data frame handling\nI0514 11:43:24.322114 1407 log.go:172] (0xc000794160) Data frame received for 5\nI0514 11:43:24.322154 1407 log.go:172] (0xc000208000) (5) Data frame handling\nI0514 11:43:24.323551 1407 log.go:172] (0xc000794160) Data frame received for 1\nI0514 11:43:24.323621 1407 log.go:172] (0xc0006da780) (1) Data frame handling\nI0514 11:43:24.323649 1407 log.go:172] (0xc0006da780) (1) Data frame sent\nI0514 11:43:24.323670 1407 log.go:172] (0xc000794160) (0xc0006da780) Stream removed, broadcasting: 1\nI0514 11:43:24.323689 1407 log.go:172] (0xc000794160) Go away received\nI0514 11:43:24.323945 1407 log.go:172] (0xc000794160) (0xc0006da780) Stream removed, broadcasting: 1\nI0514 11:43:24.323974 1407 log.go:172] (0xc000794160) (0xc00051cb40) Stream removed, broadcasting: 3\nI0514 11:43:24.323986 1407 log.go:172] (0xc000794160) (0xc000208000) Stream removed, broadcasting: 5\n" May 14 11:43:24.327: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 11:43:24.327: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 11:43:24.327: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 14 11:43:54.341: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nkwqh May 14 11:43:54.343: INFO: Scaling statefulset ss to 0 May 14 11:43:54.351: INFO: Waiting for statefulset status.replicas updated to 0 May 14 11:43:54.353: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:43:54.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-nkwqh" for this suite. 
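The scale checks above work by toggling pod readiness: moving the nginx index file out of the web root makes the readiness probe fail, which halts further scaling until the file is restored. A minimal way to reproduce this by hand with kubectl, assuming an nginx-based StatefulSet named ss with selector baz=blah,foo=bar in a namespace called demo (all names here are illustrative):

# make ss-0 unready, then ask for 3 replicas; creation should halt at ss-0
kubectl -n demo exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
kubectl -n demo scale statefulset ss --replicas=3
kubectl -n demo get pods -l baz=blah,foo=bar -w

# restore readiness; ss-1 and then ss-2 should come up in ordinal order
kubectl -n demo exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'

# scale back down and watch termination happen in reverse order (ss-2 first)
kubectl -n demo scale statefulset ss --replicas=0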
May 14 11:44:02.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:44:02.462: INFO: namespace: e2e-tests-statefulset-nkwqh, resource: bindings, ignored listing per whitelist May 14 11:44:02.479: INFO: namespace e2e-tests-statefulset-nkwqh deletion completed in 8.106737064s • [SLOW TEST:111.309 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:44:02.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-3533c4f4-95d8-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 11:44:02.652: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-353579a5-95d8-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-cdxb6" to be "success or failure" May 14 11:44:02.711: INFO: Pod "pod-projected-secrets-353579a5-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 58.328374ms May 14 11:44:04.901: INFO: Pod "pod-projected-secrets-353579a5-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249213466s May 14 11:44:06.906: INFO: Pod "pod-projected-secrets-353579a5-95d8-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.253811855s May 14 11:44:08.909: INFO: Pod "pod-projected-secrets-353579a5-95d8-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.256940789s STEP: Saw pod success May 14 11:44:08.909: INFO: Pod "pod-projected-secrets-353579a5-95d8-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:44:08.911: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-353579a5-95d8-11ea-9b22-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 14 11:44:08.962: INFO: Waiting for pod pod-projected-secrets-353579a5-95d8-11ea-9b22-0242ac110018 to disappear May 14 11:44:08.966: INFO: Pod pod-projected-secrets-353579a5-95d8-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:44:08.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cdxb6" for this suite. 
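For reference, the kind of pod exercised by the defaultMode check above can be sketched as follows; the secret name, file mode and mount path are illustrative, and the point of interest is projected.defaultMode, which sets the permissions of the files materialised from the secret:

kubectl create secret generic demo-secret --from-literal=password=s3cr3t
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: creds
      mountPath: /etc/projected
  volumes:
  - name: creds
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: demo-secret
EOF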
May 14 11:44:15.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:44:15.030: INFO: namespace: e2e-tests-projected-cdxb6, resource: bindings, ignored listing per whitelist May 14 11:44:15.124: INFO: namespace e2e-tests-projected-cdxb6 deletion completed in 6.156187649s • [SLOW TEST:12.644 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:44:15.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-3cb6f728-95d8-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume configMaps May 14 11:44:15.244: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3cb7a258-95d8-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-2gvs4" to be "success or failure" May 14 11:44:15.248: INFO: Pod "pod-projected-configmaps-3cb7a258-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.535922ms May 14 11:44:17.252: INFO: Pod "pod-projected-configmaps-3cb7a258-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007417344s May 14 11:44:19.255: INFO: Pod "pod-projected-configmaps-3cb7a258-95d8-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010906372s STEP: Saw pod success May 14 11:44:19.255: INFO: Pod "pod-projected-configmaps-3cb7a258-95d8-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:44:19.258: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-3cb7a258-95d8-11ea-9b22-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 14 11:44:19.299: INFO: Waiting for pod pod-projected-configmaps-3cb7a258-95d8-11ea-9b22-0242ac110018 to disappear May 14 11:44:19.339: INFO: Pod pod-projected-configmaps-3cb7a258-95d8-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:44:19.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2gvs4" for this suite. 
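The "mappings" variant differs from a plain configMap volume in that individual keys are projected to chosen paths via items. A minimal sketch with illustrative names and paths:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-config
          items:
          - key: data-1
            path: path/to/data-1
EOF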
May 14 11:44:25.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:44:25.405: INFO: namespace: e2e-tests-projected-2gvs4, resource: bindings, ignored listing per whitelist May 14 11:44:25.423: INFO: namespace e2e-tests-projected-2gvs4 deletion completed in 6.079777717s • [SLOW TEST:10.299 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:44:25.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command May 14 11:44:25.535: INFO: Waiting up to 5m0s for pod "var-expansion-42d84a25-95d8-11ea-9b22-0242ac110018" in namespace "e2e-tests-var-expansion-prtd7" to be "success or failure" May 14 11:44:25.578: INFO: Pod "var-expansion-42d84a25-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 43.010258ms May 14 11:44:27.962: INFO: Pod "var-expansion-42d84a25-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.426524443s May 14 11:44:29.965: INFO: Pod "var-expansion-42d84a25-95d8-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.430235323s STEP: Saw pod success May 14 11:44:29.965: INFO: Pod "var-expansion-42d84a25-95d8-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:44:29.968: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-42d84a25-95d8-11ea-9b22-0242ac110018 container dapi-container: STEP: delete the pod May 14 11:44:29.990: INFO: Waiting for pod var-expansion-42d84a25-95d8-11ea-9b22-0242ac110018 to disappear May 14 11:44:30.029: INFO: Pod var-expansion-42d84a25-95d8-11ea-9b22-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:44:30.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-prtd7" for this suite. 
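Substitution in a container's command uses $(VAR) references to environment variables declared on the same container, which the kubelet expands before starting the container. A minimal sketch (pod and variable names are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from var expansion"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]
EOF
# once the container has finished:
kubectl logs var-expansion-demo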
May 14 11:44:36.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:44:36.089: INFO: namespace: e2e-tests-var-expansion-prtd7, resource: bindings, ignored listing per whitelist May 14 11:44:36.133: INFO: namespace e2e-tests-var-expansion-prtd7 deletion completed in 6.100868412s • [SLOW TEST:10.710 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:44:36.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 14 11:44:36.503: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-cwmpw,SelfLink:/api/v1/namespaces/e2e-tests-watch-cwmpw/configmaps/e2e-watch-test-resource-version,UID:495d826b-95d8-11ea-99e8-0242ac110002,ResourceVersion:10525383,Generation:0,CreationTimestamp:2020-05-14 11:44:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 14 11:44:36.503: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-cwmpw,SelfLink:/api/v1/namespaces/e2e-tests-watch-cwmpw/configmaps/e2e-watch-test-resource-version,UID:495d826b-95d8-11ea-99e8-0242ac110002,ResourceVersion:10525384,Generation:0,CreationTimestamp:2020-05-14 11:44:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:44:36.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-cwmpw" for this suite. 
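Starting a watch from a specific resourceVersion, as the test does, can be reproduced against the raw API through kubectl proxy; the configmap name and proxy port below are illustrative:

kubectl create configmap e2e-watch-demo --from-literal=mutation=0
RV=$(kubectl get configmap e2e-watch-demo -o jsonpath='{.metadata.resourceVersion}')
kubectl patch configmap e2e-watch-demo -p '{"data":{"mutation":"1"}}'
kubectl proxy --port=8001 &
# only events newer than $RV are delivered (Ctrl-C to stop the watch)
curl -s "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&fieldSelector=metadata.name=e2e-watch-demo&resourceVersion=${RV}"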
May 14 11:44:42.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:44:42.586: INFO: namespace: e2e-tests-watch-cwmpw, resource: bindings, ignored listing per whitelist May 14 11:44:42.598: INFO: namespace e2e-tests-watch-cwmpw deletion completed in 6.092276333s • [SLOW TEST:6.465 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:44:42.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 14 11:44:47.278: INFO: Successfully updated pod "pod-update-4d1a5fc6-95d8-11ea-9b22-0242ac110018" STEP: verifying the updated pod is in kubernetes May 14 11:44:47.296: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:44:47.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-fdt78" for this suite. 
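The update step above boils down to mutating one of a running pod's mutable fields, its labels for instance, and reading the object back. For example, with an illustrative pod name:

kubectl run pod-update-demo --image=nginx --restart=Never
kubectl label pod pod-update-demo time="$(date +%s)" --overwrite
kubectl get pod pod-update-demo --show-labels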
May 14 11:45:09.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:45:09.333: INFO: namespace: e2e-tests-pods-fdt78, resource: bindings, ignored listing per whitelist May 14 11:45:09.387: INFO: namespace e2e-tests-pods-fdt78 deletion completed in 22.087811956s • [SLOW TEST:26.788 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:45:09.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:45:13.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-tq6wm" for this suite. May 14 11:45:19.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:45:19.848: INFO: namespace: e2e-tests-emptydir-wrapper-tq6wm, resource: bindings, ignored listing per whitelist May 14 11:45:19.920: INFO: namespace e2e-tests-emptydir-wrapper-tq6wm deletion completed in 6.141273484s • [SLOW TEST:10.533 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:45:19.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 14 11:45:20.049: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 11:45:20.063: INFO: Waiting for terminating namespaces to be deleted... 
May 14 11:45:20.065: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 14 11:45:20.071: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 14 11:45:20.071: INFO: Container kube-proxy ready: true, restart count 0 May 14 11:45:20.071: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 14 11:45:20.071: INFO: Container kindnet-cni ready: true, restart count 0 May 14 11:45:20.071: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 14 11:45:20.071: INFO: Container coredns ready: true, restart count 0 May 14 11:45:20.071: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 14 11:45:20.075: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 14 11:45:20.075: INFO: Container coredns ready: true, restart count 0 May 14 11:45:20.075: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 14 11:45:20.075: INFO: Container kindnet-cni ready: true, restart count 0 May 14 11:45:20.075: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 14 11:45:20.075: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: verifying the node has the label node hunter-worker STEP: verifying the node has the label node hunter-worker2 May 14 11:45:20.144: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker May 14 11:45:20.145: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2 May 14 11:45:20.145: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker May 14 11:45:20.145: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2 May 14 11:45:20.145: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2 May 14 11:45:20.145: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-6367eb5d-95d8-11ea-9b22-0242ac110018.160ee27738e5e25a], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-fch9b/filler-pod-6367eb5d-95d8-11ea-9b22-0242ac110018 to hunter-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-6367eb5d-95d8-11ea-9b22-0242ac110018.160ee2778850eaf6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-6367eb5d-95d8-11ea-9b22-0242ac110018.160ee277fd47f307], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-6367eb5d-95d8-11ea-9b22-0242ac110018.160ee2780ecaaf6c], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Normal], Name = [filler-pod-636d7bce-95d8-11ea-9b22-0242ac110018.160ee2773ab98c85], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-fch9b/filler-pod-636d7bce-95d8-11ea-9b22-0242ac110018 to hunter-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-636d7bce-95d8-11ea-9b22-0242ac110018.160ee277c26447f1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-636d7bce-95d8-11ea-9b22-0242ac110018.160ee2781a99daee], Reason = [Created], Message = [Created container] STEP: Considering event: Type = [Normal], Name = [filler-pod-636d7bce-95d8-11ea-9b22-0242ac110018.160ee2782a102279], Reason = [Started], Message = [Started container] STEP: Considering event: Type = [Warning], Name = [additional-pod.160ee278a14099b5], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node hunter-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node hunter-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:45:27.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-fch9b" for this suite. 
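The FailedScheduling event ("2 Insufficient cpu") is the expected outcome once the filler pods have claimed most of each node's allocatable CPU. The same picture can be inspected directly; the node name is taken from the log, and the event query assumes the test namespace still exists:

kubectl describe node hunter-worker | grep -A 8 'Allocated resources'
kubectl get events --field-selector reason=FailedScheduling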
May 14 11:45:33.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:45:33.587: INFO: namespace: e2e-tests-sched-pred-fch9b, resource: bindings, ignored listing per whitelist May 14 11:45:33.649: INFO: namespace e2e-tests-sched-pred-fch9b deletion completed in 6.163255866s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:13.729 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:45:33.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions May 14 11:45:33.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 14 11:45:34.175: INFO: stderr: "" May 14 11:45:34.176: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:45:34.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-vdlqr" for this suite. 
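The assertion in that test is simply that the core group/version v1 shows up in discovery, which can be checked in one line:

kubectl api-versions | grep -x v1 && echo 'core v1 is served'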
May 14 11:45:40.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:45:40.250: INFO: namespace: e2e-tests-kubectl-vdlqr, resource: bindings, ignored listing per whitelist May 14 11:45:40.312: INFO: namespace e2e-tests-kubectl-vdlqr deletion completed in 6.133201064s • [SLOW TEST:6.663 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:45:40.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 14 11:45:44.989: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6f7b0dbc-95d8-11ea-9b22-0242ac110018" May 14 11:45:44.989: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6f7b0dbc-95d8-11ea-9b22-0242ac110018" in namespace "e2e-tests-pods-vkj7w" to be "terminated due to deadline exceeded" May 14 11:45:45.018: INFO: Pod "pod-update-activedeadlineseconds-6f7b0dbc-95d8-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 29.004043ms May 14 11:45:47.029: INFO: Pod "pod-update-activedeadlineseconds-6f7b0dbc-95d8-11ea-9b22-0242ac110018": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.039345221s May 14 11:45:47.029: INFO: Pod "pod-update-activedeadlineseconds-6f7b0dbc-95d8-11ea-9b22-0242ac110018" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:45:47.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vkj7w" for this suite. 
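spec.activeDeadlineSeconds is one of the few pod spec fields that may be set or shortened on a live pod; once the deadline passes, the kubelet kills the pod and it ends up Failed with reason DeadlineExceeded, as logged above. A sketch with an illustrative pod name:

kubectl run deadline-demo --image=nginx --restart=Never
kubectl patch pod deadline-demo --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'
sleep 10
kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}{"\n"}'   # expected: Failed/DeadlineExceeded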
May 14 11:45:53.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:45:53.277: INFO: namespace: e2e-tests-pods-vkj7w, resource: bindings, ignored listing per whitelist May 14 11:45:53.323: INFO: namespace e2e-tests-pods-vkj7w deletion completed in 6.291461475s • [SLOW TEST:13.011 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:45:53.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating pod May 14 11:45:57.462: INFO: Pod pod-hostip-773a5133-95d8-11ea-9b22-0242ac110018 has hostIP: 172.17.0.4 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:45:57.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-jqc5k" for this suite. 
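
The host-IP spec above only asserts that status.hostIP is populated once the pod is scheduled (here 172.17.0.4, one of the kind worker nodes). The equivalent check against any running pod, with an illustrative pod name:

    kubectl get pod pod-hostip-demo -o jsonpath='{.status.hostIP}{"\n"}'
    # Compare with the node's own address:
    kubectl get node "$(kubectl get pod pod-hostip-demo -o jsonpath='{.spec.nodeName}')" \
      -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}'
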
May 14 11:46:19.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:46:19.610: INFO: namespace: e2e-tests-pods-jqc5k, resource: bindings, ignored listing per whitelist May 14 11:46:19.639: INFO: namespace e2e-tests-pods-jqc5k deletion completed in 22.174982827s • [SLOW TEST:26.316 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:46:19.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:46:19.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-mqfc5" for this suite. 
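
The Services spec above verifies that the built-in "kubernetes" service in the default namespace exposes the API server over HTTPS. A hand check under the same assumption of a reachable cluster:

    kubectl get service kubernetes -n default
    # Expect a ClusterIP service with an "https" port on 443:
    kubectl get service kubernetes -n default \
      -o jsonpath='{.spec.ports[?(@.name=="https")].port}{"\n"}'
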
May 14 11:46:25.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:46:25.997: INFO: namespace: e2e-tests-services-mqfc5, resource: bindings, ignored listing per whitelist May 14 11:46:26.023: INFO: namespace e2e-tests-services-mqfc5 deletion completed in 6.23984265s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.384 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:46:26.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 14 11:46:32.903: INFO: Successfully updated pod "labelsupdate8ad0ac54-95d8-11ea-9b22-0242ac110018" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:46:34.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qd5fr" for this suite. 
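
The projected downward API spec above mounts the pod's own labels as a file and expects that file to be rewritten after the labels are modified. A minimal reproduction, with illustrative names, assuming a reachable cluster:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: labels-demo          # illustrative name
      labels:
        stage: v1
    spec:
      containers:
      - name: client
        image: busybox:1.29
        command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: labels
                fieldRef:
                  fieldPath: metadata.labels
    EOF
    # Modify a label; the kubelet refreshes the projected file shortly afterwards.
    kubectl label pod labels-demo stage=v2 --overwrite
    kubectl logs labels-demo --tail=2
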
May 14 11:46:59.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:46:59.039: INFO: namespace: e2e-tests-projected-qd5fr, resource: bindings, ignored listing per whitelist May 14 11:46:59.090: INFO: namespace e2e-tests-projected-qd5fr deletion completed in 24.123703679s • [SLOW TEST:33.066 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:46:59.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test env composition May 14 11:46:59.225: INFO: Waiting up to 5m0s for pod "var-expansion-9e73a810-95d8-11ea-9b22-0242ac110018" in namespace "e2e-tests-var-expansion-227wh" to be "success or failure" May 14 11:46:59.242: INFO: Pod "var-expansion-9e73a810-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.253558ms May 14 11:47:01.245: INFO: Pod "var-expansion-9e73a810-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019643943s May 14 11:47:03.250: INFO: Pod "var-expansion-9e73a810-95d8-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.024153612s May 14 11:47:05.253: INFO: Pod "var-expansion-9e73a810-95d8-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026978452s STEP: Saw pod success May 14 11:47:05.253: INFO: Pod "var-expansion-9e73a810-95d8-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:47:05.255: INFO: Trying to get logs from node hunter-worker pod var-expansion-9e73a810-95d8-11ea-9b22-0242ac110018 container dapi-container: STEP: delete the pod May 14 11:47:05.274: INFO: Waiting for pod var-expansion-9e73a810-95d8-11ea-9b22-0242ac110018 to disappear May 14 11:47:05.279: INFO: Pod var-expansion-9e73a810-95d8-11ea-9b22-0242ac110018 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:47:05.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-227wh" for this suite. 
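
The Variable Expansion spec above builds one environment variable out of another using the $(VAR) syntax that the kubelet expands at container start. A minimal manifest showing the same composition (names and values illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo   # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29
        command: ["sh", "-c", "echo COMPOSED=$COMPOSED"]
        env:
        - name: FOO
          value: foo-value
        - name: COMPOSED
          value: prefix-$(FOO)-suffix   # $(FOO) is expanded by the kubelet, not the shell
    EOF
    kubectl logs var-expansion-demo     # once the pod has completed
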
May 14 11:47:11.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:47:11.372: INFO: namespace: e2e-tests-var-expansion-227wh, resource: bindings, ignored listing per whitelist May 14 11:47:11.375: INFO: namespace e2e-tests-var-expansion-227wh deletion completed in 6.093533033s • [SLOW TEST:12.285 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:47:11.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 14 11:47:11.627: INFO: Waiting up to 5m0s for pod "downward-api-a5d3a31f-95d8-11ea-9b22-0242ac110018" in namespace "e2e-tests-downward-api-mn5q7" to be "success or failure" May 14 11:47:11.634: INFO: Pod "downward-api-a5d3a31f-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.987237ms May 14 11:47:13.638: INFO: Pod "downward-api-a5d3a31f-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011766871s May 14 11:47:15.643: INFO: Pod "downward-api-a5d3a31f-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016351871s May 14 11:47:17.648: INFO: Pod "downward-api-a5d3a31f-95d8-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021288897s STEP: Saw pod success May 14 11:47:17.648: INFO: Pod "downward-api-a5d3a31f-95d8-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:47:17.651: INFO: Trying to get logs from node hunter-worker pod downward-api-a5d3a31f-95d8-11ea-9b22-0242ac110018 container dapi-container: STEP: delete the pod May 14 11:47:17.690: INFO: Waiting for pod downward-api-a5d3a31f-95d8-11ea-9b22-0242ac110018 to disappear May 14 11:47:17.703: INFO: Pod downward-api-a5d3a31f-95d8-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:47:17.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-mn5q7" for this suite. 
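
The Downward API spec above injects the node's IP into the container environment through a fieldRef on status.hostIP. The relevant wiring in manifest form (names illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-env-demo    # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29
        command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
    EOF
    kubectl logs downward-env-demo     # prints the IP of the node the pod landed on
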
May 14 11:47:23.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:47:23.964: INFO: namespace: e2e-tests-downward-api-mn5q7, resource: bindings, ignored listing per whitelist May 14 11:47:24.005: INFO: namespace e2e-tests-downward-api-mn5q7 deletion completed in 6.283321979s • [SLOW TEST:12.630 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:47:24.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 11:47:24.125: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad4bd09b-95d8-11ea-9b22-0242ac110018" in namespace "e2e-tests-downward-api-6rtgk" to be "success or failure" May 14 11:47:24.129: INFO: Pod "downwardapi-volume-ad4bd09b-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.452336ms May 14 11:47:26.133: INFO: Pod "downwardapi-volume-ad4bd09b-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008517634s May 14 11:47:28.137: INFO: Pod "downwardapi-volume-ad4bd09b-95d8-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011801046s STEP: Saw pod success May 14 11:47:28.137: INFO: Pod "downwardapi-volume-ad4bd09b-95d8-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:47:28.139: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ad4bd09b-95d8-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 11:47:28.172: INFO: Waiting for pod downwardapi-volume-ad4bd09b-95d8-11ea-9b22-0242ac110018 to disappear May 14 11:47:28.176: INFO: Pod downwardapi-volume-ad4bd09b-95d8-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:47:28.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-6rtgk" for this suite. 
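
The Downward API volume spec above publishes the container's memory limit as a file; because the container sets no limit, the published value falls back to the node's allocatable memory. A sketch of the volume wiring (names illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-volume-demo   # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
        # No resources.limits set on purpose: the published value then defaults
        # to the node's allocatable memory.
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
    EOF
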
May 14 11:47:34.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:47:34.267: INFO: namespace: e2e-tests-downward-api-6rtgk, resource: bindings, ignored listing per whitelist May 14 11:47:34.273: INFO: namespace e2e-tests-downward-api-6rtgk deletion completed in 6.092930469s • [SLOW TEST:10.267 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:47:34.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-drnx6 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StaefulSet May 14 11:47:34.459: INFO: Found 0 stateful pods, waiting for 3 May 14 11:47:44.476: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 14 11:47:44.476: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 14 11:47:44.476: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 14 11:47:54.464: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 14 11:47:54.464: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 14 11:47:54.464: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 14 11:47:54.515: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 14 11:48:04.588: INFO: Updating stateful set ss2 May 14 11:48:04.602: INFO: Waiting for Pod e2e-tests-statefulset-drnx6/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 14 11:48:14.690: INFO: Found 2 stateful pods, waiting for 3 May 14 11:48:24.694: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 14 11:48:24.694: 
INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 14 11:48:24.694: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 14 11:48:24.720: INFO: Updating stateful set ss2 May 14 11:48:24.737: INFO: Waiting for Pod e2e-tests-statefulset-drnx6/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 14 11:48:34.757: INFO: Updating stateful set ss2 May 14 11:48:34.771: INFO: Waiting for StatefulSet e2e-tests-statefulset-drnx6/ss2 to complete update May 14 11:48:34.771: INFO: Waiting for Pod e2e-tests-statefulset-drnx6/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 14 11:48:44.792: INFO: Deleting all statefulset in ns e2e-tests-statefulset-drnx6 May 14 11:48:44.794: INFO: Scaling statefulset ss2 to 0 May 14 11:49:04.811: INFO: Waiting for statefulset status.replicas updated to 0 May 14 11:49:04.814: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:49:04.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-drnx6" for this suite. May 14 11:49:12.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:49:12.912: INFO: namespace: e2e-tests-statefulset-drnx6, resource: bindings, ignored listing per whitelist May 14 11:49:12.954: INFO: namespace e2e-tests-statefulset-drnx6 deletion completed in 8.118129052s • [SLOW TEST:98.681 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:49:12.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 14 11:49:13.056: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-4pmkl' May 14 11:49:15.310: INFO: stderr: "" May 14 11:49:15.310: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 May 14 11:49:15.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-4pmkl' May 14 11:49:21.776: INFO: stderr: "" May 14 11:49:21.776: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:49:21.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4pmkl" for this suite. May 14 11:49:27.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:49:27.819: INFO: namespace: e2e-tests-kubectl-4pmkl, resource: bindings, ignored listing per whitelist May 14 11:49:27.882: INFO: namespace e2e-tests-kubectl-4pmkl deletion completed in 6.097244703s • [SLOW TEST:14.927 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:49:27.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-f720ed07-95d8-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume configMaps May 14 11:49:28.005: INFO: Waiting up to 5m0s for pod "pod-configmaps-f7234743-95d8-11ea-9b22-0242ac110018" in namespace "e2e-tests-configmap-zjrw7" to be "success or failure" May 14 11:49:28.020: INFO: Pod "pod-configmaps-f7234743-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.047995ms May 14 11:49:30.490: INFO: Pod "pod-configmaps-f7234743-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.48448355s May 14 11:49:32.495: INFO: Pod "pod-configmaps-f7234743-95d8-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.489390127s STEP: Saw pod success May 14 11:49:32.495: INFO: Pod "pod-configmaps-f7234743-95d8-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:49:32.498: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-f7234743-95d8-11ea-9b22-0242ac110018 container configmap-volume-test: STEP: delete the pod May 14 11:49:32.532: INFO: Waiting for pod pod-configmaps-f7234743-95d8-11ea-9b22-0242ac110018 to disappear May 14 11:49:32.536: INFO: Pod pod-configmaps-f7234743-95d8-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:49:32.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-zjrw7" for this suite. May 14 11:49:38.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:49:38.627: INFO: namespace: e2e-tests-configmap-zjrw7, resource: bindings, ignored listing per whitelist May 14 11:49:38.656: INFO: namespace e2e-tests-configmap-zjrw7 deletion completed in 6.116522709s • [SLOW TEST:10.773 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:49:38.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-fd87e6ac-95d8-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 11:49:38.752: INFO: Waiting up to 5m0s for pod "pod-secrets-fd88a11f-95d8-11ea-9b22-0242ac110018" in namespace "e2e-tests-secrets-dntpj" to be "success or failure" May 14 11:49:38.768: INFO: Pod "pod-secrets-fd88a11f-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.688203ms May 14 11:49:40.774: INFO: Pod "pod-secrets-fd88a11f-95d8-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021381408s May 14 11:49:42.778: INFO: Pod "pod-secrets-fd88a11f-95d8-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025530508s STEP: Saw pod success May 14 11:49:42.778: INFO: Pod "pod-secrets-fd88a11f-95d8-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:49:42.781: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-fd88a11f-95d8-11ea-9b22-0242ac110018 container secret-volume-test: STEP: delete the pod May 14 11:49:42.817: INFO: Waiting for pod pod-secrets-fd88a11f-95d8-11ea-9b22-0242ac110018 to disappear May 14 11:49:42.890: INFO: Pod pod-secrets-fd88a11f-95d8-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:49:42.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-dntpj" for this suite. May 14 11:49:48.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:49:48.965: INFO: namespace: e2e-tests-secrets-dntpj, resource: bindings, ignored listing per whitelist May 14 11:49:48.997: INFO: namespace e2e-tests-secrets-dntpj deletion completed in 6.102950688s • [SLOW TEST:10.341 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:49:48.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-03c2ebea-95d9-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 11:49:49.249: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-03c56e27-95d9-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-fck46" to be "success or failure" May 14 11:49:49.252: INFO: Pod "pod-projected-secrets-03c56e27-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398774ms May 14 11:49:51.280: INFO: Pod "pod-projected-secrets-03c56e27-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030612675s May 14 11:49:53.284: INFO: Pod "pod-projected-secrets-03c56e27-95d9-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.034788145s May 14 11:49:55.288: INFO: Pod "pod-projected-secrets-03c56e27-95d9-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.038800231s STEP: Saw pod success May 14 11:49:55.288: INFO: Pod "pod-projected-secrets-03c56e27-95d9-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:49:55.291: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-03c56e27-95d9-11ea-9b22-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 14 11:49:55.330: INFO: Waiting for pod pod-projected-secrets-03c56e27-95d9-11ea-9b22-0242ac110018 to disappear May 14 11:49:55.346: INFO: Pod pod-projected-secrets-03c56e27-95d9-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:49:55.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fck46" for this suite. May 14 11:50:01.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:50:01.475: INFO: namespace: e2e-tests-projected-fck46, resource: bindings, ignored listing per whitelist May 14 11:50:01.501: INFO: namespace e2e-tests-projected-fck46 deletion completed in 6.151230531s • [SLOW TEST:12.504 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:50:01.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 14 11:50:01.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-274hj' May 14 11:50:02.384: INFO: stderr: "" May 14 11:50:02.385: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 14 11:50:02.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-274hj' May 14 11:50:02.541: INFO: stderr: "" May 14 11:50:02.541: INFO: stdout: "update-demo-nautilus-7v6sr update-demo-nautilus-sr6rp " May 14 11:50:02.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7v6sr -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-274hj' May 14 11:50:02.699: INFO: stderr: "" May 14 11:50:02.699: INFO: stdout: "" May 14 11:50:02.699: INFO: update-demo-nautilus-7v6sr is created but not running May 14 11:50:07.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-274hj' May 14 11:50:07.800: INFO: stderr: "" May 14 11:50:07.800: INFO: stdout: "update-demo-nautilus-7v6sr update-demo-nautilus-sr6rp " May 14 11:50:07.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7v6sr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-274hj' May 14 11:50:07.904: INFO: stderr: "" May 14 11:50:07.904: INFO: stdout: "true" May 14 11:50:07.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7v6sr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-274hj' May 14 11:50:08.000: INFO: stderr: "" May 14 11:50:08.001: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 11:50:08.001: INFO: validating pod update-demo-nautilus-7v6sr May 14 11:50:08.005: INFO: got data: { "image": "nautilus.jpg" } May 14 11:50:08.005: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 11:50:08.005: INFO: update-demo-nautilus-7v6sr is verified up and running May 14 11:50:08.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sr6rp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-274hj' May 14 11:50:08.101: INFO: stderr: "" May 14 11:50:08.101: INFO: stdout: "true" May 14 11:50:08.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sr6rp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-274hj' May 14 11:50:08.211: INFO: stderr: "" May 14 11:50:08.211: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 11:50:08.211: INFO: validating pod update-demo-nautilus-sr6rp May 14 11:50:08.231: INFO: got data: { "image": "nautilus.jpg" } May 14 11:50:08.231: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 14 11:50:08.231: INFO: update-demo-nautilus-sr6rp is verified up and running STEP: scaling down the replication controller May 14 11:50:08.234: INFO: scanned /root for discovery docs: May 14 11:50:08.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-274hj' May 14 11:50:09.365: INFO: stderr: "" May 14 11:50:09.365: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 14 11:50:09.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-274hj' May 14 11:50:09.494: INFO: stderr: "" May 14 11:50:09.494: INFO: stdout: "update-demo-nautilus-7v6sr update-demo-nautilus-sr6rp " STEP: Replicas for name=update-demo: expected=1 actual=2 May 14 11:50:14.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-274hj' May 14 11:50:14.610: INFO: stderr: "" May 14 11:50:14.610: INFO: stdout: "update-demo-nautilus-7v6sr update-demo-nautilus-sr6rp " STEP: Replicas for name=update-demo: expected=1 actual=2 May 14 11:50:19.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-274hj' May 14 11:50:19.727: INFO: stderr: "" May 14 11:50:19.727: INFO: stdout: "update-demo-nautilus-7v6sr update-demo-nautilus-sr6rp " STEP: Replicas for name=update-demo: expected=1 actual=2 May 14 11:50:24.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-274hj' May 14 11:50:24.819: INFO: stderr: "" May 14 11:50:24.819: INFO: stdout: "update-demo-nautilus-7v6sr " May 14 11:50:24.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7v6sr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-274hj' May 14 11:50:24.904: INFO: stderr: "" May 14 11:50:24.904: INFO: stdout: "true" May 14 11:50:24.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7v6sr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-274hj' May 14 11:50:25.000: INFO: stderr: "" May 14 11:50:25.000: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 11:50:25.000: INFO: validating pod update-demo-nautilus-7v6sr May 14 11:50:25.003: INFO: got data: { "image": "nautilus.jpg" } May 14 11:50:25.003: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 14 11:50:25.003: INFO: update-demo-nautilus-7v6sr is verified up and running STEP: scaling up the replication controller May 14 11:50:25.004: INFO: scanned /root for discovery docs: May 14 11:50:25.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-274hj' May 14 11:50:26.131: INFO: stderr: "" May 14 11:50:26.132: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 14 11:50:26.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-274hj' May 14 11:50:26.229: INFO: stderr: "" May 14 11:50:26.229: INFO: stdout: "update-demo-nautilus-7v6sr update-demo-nautilus-j7qtg " May 14 11:50:26.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7v6sr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-274hj' May 14 11:50:26.325: INFO: stderr: "" May 14 11:50:26.325: INFO: stdout: "true" May 14 11:50:26.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7v6sr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-274hj' May 14 11:50:26.414: INFO: stderr: "" May 14 11:50:26.414: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 11:50:26.414: INFO: validating pod update-demo-nautilus-7v6sr May 14 11:50:26.416: INFO: got data: { "image": "nautilus.jpg" } May 14 11:50:26.416: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 11:50:26.416: INFO: update-demo-nautilus-7v6sr is verified up and running May 14 11:50:26.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j7qtg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-274hj' May 14 11:50:26.522: INFO: stderr: "" May 14 11:50:26.522: INFO: stdout: "" May 14 11:50:26.522: INFO: update-demo-nautilus-j7qtg is created but not running May 14 11:50:31.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-274hj' May 14 11:50:31.631: INFO: stderr: "" May 14 11:50:31.631: INFO: stdout: "update-demo-nautilus-7v6sr update-demo-nautilus-j7qtg " May 14 11:50:31.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7v6sr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-274hj' May 14 11:50:31.721: INFO: stderr: "" May 14 11:50:31.721: INFO: stdout: "true" May 14 11:50:31.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7v6sr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-274hj' May 14 11:50:31.805: INFO: stderr: "" May 14 11:50:31.805: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 11:50:31.805: INFO: validating pod update-demo-nautilus-7v6sr May 14 11:50:31.808: INFO: got data: { "image": "nautilus.jpg" } May 14 11:50:31.808: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 11:50:31.808: INFO: update-demo-nautilus-7v6sr is verified up and running May 14 11:50:31.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j7qtg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-274hj' May 14 11:50:31.913: INFO: stderr: "" May 14 11:50:31.913: INFO: stdout: "true" May 14 11:50:31.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j7qtg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-274hj' May 14 11:50:32.014: INFO: stderr: "" May 14 11:50:32.014: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 11:50:32.014: INFO: validating pod update-demo-nautilus-j7qtg May 14 11:50:32.017: INFO: got data: { "image": "nautilus.jpg" } May 14 11:50:32.017: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 11:50:32.017: INFO: update-demo-nautilus-j7qtg is verified up and running STEP: using delete to clean up resources May 14 11:50:32.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-274hj' May 14 11:50:32.117: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 11:50:32.117: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 14 11:50:32.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-274hj' May 14 11:50:32.232: INFO: stderr: "No resources found.\n" May 14 11:50:32.232: INFO: stdout: "" May 14 11:50:32.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-274hj -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 14 11:50:32.521: INFO: stderr: "" May 14 11:50:32.521: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:50:32.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-274hj" for this suite. 
May 14 11:50:56.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:50:56.633: INFO: namespace: e2e-tests-kubectl-274hj, resource: bindings, ignored listing per whitelist May 14 11:50:56.644: INFO: namespace e2e-tests-kubectl-274hj deletion completed in 24.11847723s • [SLOW TEST:55.143 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:50:56.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 14 11:50:56.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mf5dw' May 14 11:50:56.859: INFO: stderr: "" May 14 11:50:56.859: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created May 14 11:51:01.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mf5dw -o json' May 14 11:51:02.008: INFO: stderr: "" May 14 11:51:02.008: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-14T11:50:56Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-mf5dw\",\n \"resourceVersion\": \"10526849\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-mf5dw/pods/e2e-test-nginx-pod\",\n \"uid\": \"2c17a177-95d9-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-njvm5\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": 
\"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-njvm5\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-njvm5\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-14T11:50:56Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-14T11:51:00Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-14T11:51:00Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-14T11:50:56Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b059d31543288eff6f1d09345bc274815f5de6f583ec26afe7cc1e47d99be4dd\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-14T11:50:59Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.53\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-14T11:50:56Z\"\n }\n}\n" STEP: replace the image in the pod May 14 11:51:02.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-mf5dw' May 14 11:51:02.248: INFO: stderr: "" May 14 11:51:02.248: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 May 14 11:51:02.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mf5dw' May 14 11:51:11.266: INFO: stderr: "" May 14 11:51:11.266: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:51:11.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mf5dw" for this suite. 
May 14 11:51:17.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:51:17.330: INFO: namespace: e2e-tests-kubectl-mf5dw, resource: bindings, ignored listing per whitelist May 14 11:51:17.360: INFO: namespace e2e-tests-kubectl-mf5dw deletion completed in 6.090516629s • [SLOW TEST:20.716 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:51:17.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 14 11:51:22.225: INFO: Successfully updated pod "annotationupdate3861898e-95d9-11ea-9b22-0242ac110018" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:51:24.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xfrtw" for this suite. 
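
The spec above is the annotation twin of the earlier labels test: a downwardAPI volume item with fieldPath metadata.annotations is mounted, the annotations are changed, and the file is expected to refresh. The update step by hand (pod name and annotation are illustrative):

    kubectl annotate pod annotation-demo builder=bar --overwrite
    # The downwardAPI volume item feeding the mounted file looks like:
    #   - path: annotations
    #     fieldRef:
    #       fieldPath: metadata.annotations
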
May 14 11:51:46.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:51:46.395: INFO: namespace: e2e-tests-downward-api-xfrtw, resource: bindings, ignored listing per whitelist May 14 11:51:46.404: INFO: namespace e2e-tests-downward-api-xfrtw deletion completed in 22.108362982s • [SLOW TEST:29.044 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:51:46.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-49bc03ea-95d9-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 11:51:46.662: INFO: Waiting up to 5m0s for pod "pod-secrets-49bf1564-95d9-11ea-9b22-0242ac110018" in namespace "e2e-tests-secrets-4p58t" to be "success or failure" May 14 11:51:46.673: INFO: Pod "pod-secrets-49bf1564-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.917238ms May 14 11:51:48.676: INFO: Pod "pod-secrets-49bf1564-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014123773s May 14 11:51:50.923: INFO: Pod "pod-secrets-49bf1564-95d9-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.261525942s May 14 11:51:52.927: INFO: Pod "pod-secrets-49bf1564-95d9-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.265393895s STEP: Saw pod success May 14 11:51:52.927: INFO: Pod "pod-secrets-49bf1564-95d9-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:51:52.930: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-49bf1564-95d9-11ea-9b22-0242ac110018 container secret-volume-test: STEP: delete the pod May 14 11:51:53.099: INFO: Waiting for pod pod-secrets-49bf1564-95d9-11ea-9b22-0242ac110018 to disappear May 14 11:51:53.167: INFO: Pod pod-secrets-49bf1564-95d9-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:51:53.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-4p58t" for this suite. 
May 14 11:51:59.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:51:59.239: INFO: namespace: e2e-tests-secrets-4p58t, resource: bindings, ignored listing per whitelist May 14 11:51:59.271: INFO: namespace e2e-tests-secrets-4p58t deletion completed in 6.100503064s • [SLOW TEST:12.867 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:51:59.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-515ffa72-95d9-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 11:51:59.427: INFO: Waiting up to 5m0s for pod "pod-secrets-51630f4a-95d9-11ea-9b22-0242ac110018" in namespace "e2e-tests-secrets-l77mk" to be "success or failure" May 14 11:51:59.430: INFO: Pod "pod-secrets-51630f4a-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.485993ms May 14 11:52:01.761: INFO: Pod "pod-secrets-51630f4a-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.334773881s May 14 11:52:03.898: INFO: Pod "pod-secrets-51630f4a-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471294349s May 14 11:52:05.905: INFO: Pod "pod-secrets-51630f4a-95d9-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.47829484s STEP: Saw pod success May 14 11:52:05.905: INFO: Pod "pod-secrets-51630f4a-95d9-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:52:05.907: INFO: Trying to get logs from node hunter-worker pod pod-secrets-51630f4a-95d9-11ea-9b22-0242ac110018 container secret-volume-test: STEP: delete the pod May 14 11:52:05.942: INFO: Waiting for pod pod-secrets-51630f4a-95d9-11ea-9b22-0242ac110018 to disappear May 14 11:52:05.957: INFO: Pod pod-secrets-51630f4a-95d9-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:52:05.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-l77mk" for this suite. 
May 14 11:52:11.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:52:12.042: INFO: namespace: e2e-tests-secrets-l77mk, resource: bindings, ignored listing per whitelist May 14 11:52:12.049: INFO: namespace e2e-tests-secrets-l77mk deletion completed in 6.087924425s • [SLOW TEST:12.777 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:52:12.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:52:18.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-ts75r" for this suite. 
May 14 11:53:14.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:53:14.656: INFO: namespace: e2e-tests-kubelet-test-ts75r, resource: bindings, ignored listing per whitelist May 14 11:53:14.681: INFO: namespace e2e-tests-kubelet-test-ts75r deletion completed in 56.279150974s • [SLOW TEST:62.632 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 should not write to root filesystem [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:53:14.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-7e969b61-95d9-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 11:53:15.905: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7ec57fc1-95d9-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-6pqts" to be "success or failure" May 14 11:53:15.965: INFO: Pod "pod-projected-secrets-7ec57fc1-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 59.462904ms May 14 11:53:18.104: INFO: Pod "pod-projected-secrets-7ec57fc1-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198954813s May 14 11:53:20.108: INFO: Pod "pod-projected-secrets-7ec57fc1-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202830502s May 14 11:53:22.112: INFO: Pod "pod-projected-secrets-7ec57fc1-95d9-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 6.20670482s May 14 11:53:24.115: INFO: Pod "pod-projected-secrets-7ec57fc1-95d9-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.209394339s STEP: Saw pod success May 14 11:53:24.115: INFO: Pod "pod-projected-secrets-7ec57fc1-95d9-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:53:24.117: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-7ec57fc1-95d9-11ea-9b22-0242ac110018 container projected-secret-volume-test: STEP: delete the pod May 14 11:53:24.145: INFO: Waiting for pod pod-projected-secrets-7ec57fc1-95d9-11ea-9b22-0242ac110018 to disappear May 14 11:53:24.229: INFO: Pod pod-projected-secrets-7ec57fc1-95d9-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:53:24.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6pqts" for this suite. May 14 11:53:30.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:53:30.288: INFO: namespace: e2e-tests-projected-6pqts, resource: bindings, ignored listing per whitelist May 14 11:53:30.326: INFO: namespace e2e-tests-projected-6pqts deletion completed in 6.094123996s • [SLOW TEST:15.644 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:53:30.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults May 14 11:53:30.482: INFO: Waiting up to 5m0s for pod "client-containers-87aa6dcd-95d9-11ea-9b22-0242ac110018" in namespace "e2e-tests-containers-ccfs5" to be "success or failure" May 14 11:53:30.600: INFO: Pod "client-containers-87aa6dcd-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 118.158703ms May 14 11:53:32.603: INFO: Pod "client-containers-87aa6dcd-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121224119s May 14 11:53:34.607: INFO: Pod "client-containers-87aa6dcd-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125397399s May 14 11:53:36.616: INFO: Pod "client-containers-87aa6dcd-95d9-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.134537684s STEP: Saw pod success May 14 11:53:36.616: INFO: Pod "client-containers-87aa6dcd-95d9-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:53:36.622: INFO: Trying to get logs from node hunter-worker2 pod client-containers-87aa6dcd-95d9-11ea-9b22-0242ac110018 container test-container: STEP: delete the pod May 14 11:53:36.712: INFO: Waiting for pod client-containers-87aa6dcd-95d9-11ea-9b22-0242ac110018 to disappear May 14 11:53:36.748: INFO: Pod client-containers-87aa6dcd-95d9-11ea-9b22-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:53:36.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-ccfs5" for this suite. May 14 11:53:42.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:53:42.800: INFO: namespace: e2e-tests-containers-ccfs5, resource: bindings, ignored listing per whitelist May 14 11:53:42.844: INFO: namespace e2e-tests-containers-ccfs5 deletion completed in 6.092479557s • [SLOW TEST:12.518 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:53:42.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 14 11:53:43.551: INFO: Waiting up to 5m0s for pod "pod-8f50a5b9-95d9-11ea-9b22-0242ac110018" in namespace "e2e-tests-emptydir-2wv7g" to be "success or failure" May 14 11:53:43.650: INFO: Pod "pod-8f50a5b9-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 98.709758ms May 14 11:53:45.654: INFO: Pod "pod-8f50a5b9-95d9-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102650411s May 14 11:53:47.659: INFO: Pod "pod-8f50a5b9-95d9-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.107326595s May 14 11:53:49.663: INFO: Pod "pod-8f50a5b9-95d9-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.111993864s STEP: Saw pod success May 14 11:53:49.663: INFO: Pod "pod-8f50a5b9-95d9-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:53:49.666: INFO: Trying to get logs from node hunter-worker pod pod-8f50a5b9-95d9-11ea-9b22-0242ac110018 container test-container: STEP: delete the pod May 14 11:53:49.813: INFO: Waiting for pod pod-8f50a5b9-95d9-11ea-9b22-0242ac110018 to disappear May 14 11:53:49.870: INFO: Pod pod-8f50a5b9-95d9-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:53:49.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2wv7g" for this suite. May 14 11:53:55.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:53:55.930: INFO: namespace: e2e-tests-emptydir-2wv7g, resource: bindings, ignored listing per whitelist May 14 11:53:55.955: INFO: namespace e2e-tests-emptydir-2wv7g deletion completed in 6.082063949s • [SLOW TEST:13.112 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:53:55.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 11:53:56.095: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 14 11:53:56.122: INFO: Number of nodes with available pods: 0 May 14 11:53:56.122: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 14 11:53:56.190: INFO: Number of nodes with available pods: 0 May 14 11:53:56.190: INFO: Node hunter-worker is running more than one daemon pod May 14 11:53:58.398: INFO: Number of nodes with available pods: 0 May 14 11:53:58.398: INFO: Node hunter-worker is running more than one daemon pod May 14 11:53:59.393: INFO: Number of nodes with available pods: 0 May 14 11:53:59.393: INFO: Node hunter-worker is running more than one daemon pod May 14 11:54:00.194: INFO: Number of nodes with available pods: 0 May 14 11:54:00.194: INFO: Node hunter-worker is running more than one daemon pod May 14 11:54:01.231: INFO: Number of nodes with available pods: 1 May 14 11:54:01.231: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 14 11:54:01.392: INFO: Number of nodes with available pods: 1 May 14 11:54:01.392: INFO: Number of running nodes: 0, number of available pods: 1 May 14 11:54:02.396: INFO: Number of nodes with available pods: 0 May 14 11:54:02.396: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 14 11:54:02.542: INFO: Number of nodes with available pods: 0 May 14 11:54:02.542: INFO: Node hunter-worker is running more than one daemon pod May 14 11:54:03.546: INFO: Number of nodes with available pods: 0 May 14 11:54:03.546: INFO: Node hunter-worker is running more than one daemon pod May 14 11:54:04.546: INFO: Number of nodes with available pods: 0 May 14 11:54:04.546: INFO: Node hunter-worker is running more than one daemon pod May 14 11:54:05.546: INFO: Number of nodes with available pods: 0 May 14 11:54:05.546: INFO: Node hunter-worker is running more than one daemon pod May 14 11:54:06.553: INFO: Number of nodes with available pods: 0 May 14 11:54:06.553: INFO: Node hunter-worker is running more than one daemon pod May 14 11:54:07.546: INFO: Number of nodes with available pods: 0 May 14 11:54:07.546: INFO: Node hunter-worker is running more than one daemon pod May 14 11:54:08.564: INFO: Number of nodes with available pods: 0 May 14 11:54:08.564: INFO: Node hunter-worker is running more than one daemon pod May 14 11:54:09.710: INFO: Number of nodes with available pods: 0 May 14 11:54:09.710: INFO: Node hunter-worker is running more than one daemon pod May 14 11:54:10.546: INFO: Number of nodes with available pods: 0 May 14 11:54:10.546: INFO: Node hunter-worker is running more than one daemon pod May 14 11:54:11.559: INFO: Number of nodes with available pods: 0 May 14 11:54:11.559: INFO: Node hunter-worker is running more than one daemon pod May 14 11:54:12.547: INFO: Number of nodes with available pods: 1 May 14 11:54:12.547: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-td57b, will wait for the garbage collector to delete the pods May 14 11:54:12.646: INFO: Deleting DaemonSet.extensions daemon-set took: 40.468989ms May 14 11:54:12.846: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.296127ms May 14 11:54:21.356: INFO: Number of nodes with available pods: 0 May 14 11:54:21.356: INFO: Number of running nodes: 0, number of available pods: 0 May 14 11:54:21.359: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-td57b/daemonsets","resourceVersion":"10527471"},"items":null} May 14 11:54:21.361: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-td57b/pods","resourceVersion":"10527471"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:54:21.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-td57b" for this suite. May 14 11:54:27.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:54:27.507: INFO: namespace: e2e-tests-daemonsets-td57b, resource: bindings, ignored listing per whitelist May 14 11:54:27.532: INFO: namespace e2e-tests-daemonsets-td57b deletion completed in 6.134418299s • [SLOW TEST:31.576 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:54:27.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:55:01.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-container-runtime-22qtt" for this suite. May 14 11:55:09.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:55:09.718: INFO: namespace: e2e-tests-container-runtime-22qtt, resource: bindings, ignored listing per whitelist May 14 11:55:09.734: INFO: namespace e2e-tests-container-runtime-22qtt deletion completed in 8.139456537s • [SLOW TEST:42.202 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:55:09.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 14 11:55:10.246: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zkcm5,SelfLink:/api/v1/namespaces/e2e-tests-watch-zkcm5/configmaps/e2e-watch-test-label-changed,UID:c308155e-95d9-11ea-99e8-0242ac110002,ResourceVersion:10527655,Generation:0,CreationTimestamp:2020-05-14 11:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 14 11:55:10.246: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zkcm5,SelfLink:/api/v1/namespaces/e2e-tests-watch-zkcm5/configmaps/e2e-watch-test-label-changed,UID:c308155e-95d9-11ea-99e8-0242ac110002,ResourceVersion:10527656,Generation:0,CreationTimestamp:2020-05-14 11:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 14 11:55:10.246: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zkcm5,SelfLink:/api/v1/namespaces/e2e-tests-watch-zkcm5/configmaps/e2e-watch-test-label-changed,UID:c308155e-95d9-11ea-99e8-0242ac110002,ResourceVersion:10527657,Generation:0,CreationTimestamp:2020-05-14 11:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 14 11:55:20.269: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zkcm5,SelfLink:/api/v1/namespaces/e2e-tests-watch-zkcm5/configmaps/e2e-watch-test-label-changed,UID:c308155e-95d9-11ea-99e8-0242ac110002,ResourceVersion:10527678,Generation:0,CreationTimestamp:2020-05-14 11:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 14 11:55:20.270: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zkcm5,SelfLink:/api/v1/namespaces/e2e-tests-watch-zkcm5/configmaps/e2e-watch-test-label-changed,UID:c308155e-95d9-11ea-99e8-0242ac110002,ResourceVersion:10527679,Generation:0,CreationTimestamp:2020-05-14 11:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 14 11:55:20.270: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-zkcm5,SelfLink:/api/v1/namespaces/e2e-tests-watch-zkcm5/configmaps/e2e-watch-test-label-changed,UID:c308155e-95d9-11ea-99e8-0242ac110002,ResourceVersion:10527680,Generation:0,CreationTimestamp:2020-05-14 11:55:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:55:20.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-zkcm5" for this suite. 
May 14 11:55:26.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:55:26.544: INFO: namespace: e2e-tests-watch-zkcm5, resource: bindings, ignored listing per whitelist May 14 11:55:26.569: INFO: namespace e2e-tests-watch-zkcm5 deletion completed in 6.294335773s • [SLOW TEST:16.835 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:55:26.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-cd1a077f-95d9-11ea-9b22-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-cd1a0802-95d9-11ea-9b22-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-cd1a077f-95d9-11ea-9b22-0242ac110018 STEP: Updating configmap cm-test-opt-upd-cd1a0802-95d9-11ea-9b22-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-cd1a082f-95d9-11ea-9b22-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:56:47.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tbscm" for this suite. 
May 14 11:57:11.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:57:11.664: INFO: namespace: e2e-tests-projected-tbscm, resource: bindings, ignored listing per whitelist May 14 11:57:11.872: INFO: namespace e2e-tests-projected-tbscm deletion completed in 24.33521652s • [SLOW TEST:105.302 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:57:11.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-0c8e2036-95da-11ea-9b22-0242ac110018 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-0c8e2036-95da-11ea-9b22-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:57:22.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bn22x" for this suite. 
May 14 11:57:44.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:57:44.637: INFO: namespace: e2e-tests-projected-bn22x, resource: bindings, ignored listing per whitelist May 14 11:57:44.695: INFO: namespace e2e-tests-projected-bn22x deletion completed in 22.12558307s • [SLOW TEST:32.823 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:57:44.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-tcqj STEP: Creating a pod to test atomic-volume-subpath May 14 11:57:44.817: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-tcqj" in namespace "e2e-tests-subpath-vhcfv" to be "success or failure" May 14 11:57:44.858: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Pending", Reason="", readiness=false. Elapsed: 40.049579ms May 14 11:57:46.863: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045134173s May 14 11:57:48.867: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049154476s May 14 11:57:50.871: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053104989s May 14 11:57:53.049: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Running", Reason="", readiness=true. Elapsed: 8.231714997s May 14 11:57:55.053: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Running", Reason="", readiness=false. Elapsed: 10.235659797s May 14 11:57:57.058: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Running", Reason="", readiness=false. Elapsed: 12.240060482s May 14 11:57:59.062: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Running", Reason="", readiness=false. Elapsed: 14.244981298s May 14 11:58:01.066: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Running", Reason="", readiness=false. Elapsed: 16.248505162s May 14 11:58:03.069: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Running", Reason="", readiness=false. Elapsed: 18.252000225s May 14 11:58:05.073: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Running", Reason="", readiness=false. Elapsed: 20.255711571s May 14 11:58:07.077: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.259848159s May 14 11:58:09.082: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Running", Reason="", readiness=false. Elapsed: 24.26465032s May 14 11:58:11.086: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Running", Reason="", readiness=false. Elapsed: 26.268832547s May 14 11:58:13.092: INFO: Pod "pod-subpath-test-projected-tcqj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.274123486s STEP: Saw pod success May 14 11:58:13.092: INFO: Pod "pod-subpath-test-projected-tcqj" satisfied condition "success or failure" May 14 11:58:13.095: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-tcqj container test-container-subpath-projected-tcqj: STEP: delete the pod May 14 11:58:13.132: INFO: Waiting for pod pod-subpath-test-projected-tcqj to disappear May 14 11:58:13.160: INFO: Pod pod-subpath-test-projected-tcqj no longer exists STEP: Deleting pod pod-subpath-test-projected-tcqj May 14 11:58:13.160: INFO: Deleting pod "pod-subpath-test-projected-tcqj" in namespace "e2e-tests-subpath-vhcfv" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:58:13.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-vhcfv" for this suite. May 14 11:58:19.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:58:19.237: INFO: namespace: e2e-tests-subpath-vhcfv, resource: bindings, ignored listing per whitelist May 14 11:58:19.267: INFO: namespace e2e-tests-subpath-vhcfv deletion completed in 6.101492964s • [SLOW TEST:34.572 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:58:19.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-33decc72-95da-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 11:58:19.442: INFO: Waiting up to 5m0s for pod "pod-secrets-33e3e873-95da-11ea-9b22-0242ac110018" in namespace "e2e-tests-secrets-t5q8d" to be "success or failure" May 14 11:58:19.468: INFO: Pod "pod-secrets-33e3e873-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 26.729364ms May 14 11:58:21.487: INFO: Pod "pod-secrets-33e3e873-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045270626s May 14 11:58:23.540: INFO: Pod "pod-secrets-33e3e873-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098681287s May 14 11:58:25.547: INFO: Pod "pod-secrets-33e3e873-95da-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104880029s STEP: Saw pod success May 14 11:58:25.547: INFO: Pod "pod-secrets-33e3e873-95da-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:58:25.549: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-33e3e873-95da-11ea-9b22-0242ac110018 container secret-volume-test: STEP: delete the pod May 14 11:58:25.597: INFO: Waiting for pod pod-secrets-33e3e873-95da-11ea-9b22-0242ac110018 to disappear May 14 11:58:25.613: INFO: Pod pod-secrets-33e3e873-95da-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:58:25.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-t5q8d" for this suite. May 14 11:58:31.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:58:31.645: INFO: namespace: e2e-tests-secrets-t5q8d, resource: bindings, ignored listing per whitelist May 14 11:58:31.708: INFO: namespace e2e-tests-secrets-t5q8d deletion completed in 6.091596308s • [SLOW TEST:12.441 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:58:31.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-3b447e44-95da-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume configMaps May 14 11:58:31.846: INFO: Waiting up to 5m0s for pod "pod-configmaps-3b45b78a-95da-11ea-9b22-0242ac110018" in namespace "e2e-tests-configmap-dj8s2" to be "success or failure" May 14 11:58:31.865: INFO: Pod "pod-configmaps-3b45b78a-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.888654ms May 14 11:58:33.870: INFO: Pod "pod-configmaps-3b45b78a-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024930899s May 14 11:58:35.873: INFO: Pod "pod-configmaps-3b45b78a-95da-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.027653206s May 14 11:58:37.876: INFO: Pod "pod-configmaps-3b45b78a-95da-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.030605569s STEP: Saw pod success May 14 11:58:37.876: INFO: Pod "pod-configmaps-3b45b78a-95da-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:58:37.878: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-3b45b78a-95da-11ea-9b22-0242ac110018 container configmap-volume-test: STEP: delete the pod May 14 11:58:37.962: INFO: Waiting for pod pod-configmaps-3b45b78a-95da-11ea-9b22-0242ac110018 to disappear May 14 11:58:37.991: INFO: Pod pod-configmaps-3b45b78a-95da-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:58:37.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-dj8s2" for this suite. May 14 11:58:44.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:58:44.187: INFO: namespace: e2e-tests-configmap-dj8s2, resource: bindings, ignored listing per whitelist May 14 11:58:44.210: INFO: namespace e2e-tests-configmap-dj8s2 deletion completed in 6.215468157s • [SLOW TEST:12.502 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:58:44.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-42d2fa26-95da-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume configMaps May 14 11:58:44.895: INFO: Waiting up to 5m0s for pod "pod-configmaps-43105915-95da-11ea-9b22-0242ac110018" in namespace "e2e-tests-configmap-c78q6" to be "success or failure" May 14 11:58:45.045: INFO: Pod "pod-configmaps-43105915-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 150.432826ms May 14 11:58:47.048: INFO: Pod "pod-configmaps-43105915-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15351419s May 14 11:58:49.051: INFO: Pod "pod-configmaps-43105915-95da-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.156706636s May 14 11:58:51.055: INFO: Pod "pod-configmaps-43105915-95da-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.160513667s STEP: Saw pod success May 14 11:58:51.055: INFO: Pod "pod-configmaps-43105915-95da-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:58:51.059: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-43105915-95da-11ea-9b22-0242ac110018 container configmap-volume-test: STEP: delete the pod May 14 11:58:51.793: INFO: Waiting for pod pod-configmaps-43105915-95da-11ea-9b22-0242ac110018 to disappear May 14 11:58:52.188: INFO: Pod pod-configmaps-43105915-95da-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:58:52.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-c78q6" for this suite. May 14 11:59:01.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:59:01.240: INFO: namespace: e2e-tests-configmap-c78q6, resource: bindings, ignored listing per whitelist May 14 11:59:01.284: INFO: namespace e2e-tests-configmap-c78q6 deletion completed in 9.092254419s • [SLOW TEST:17.074 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:59:01.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server May 14 11:59:01.902: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:59:02.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lg7kv" for this suite. 
May 14 11:59:08.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:59:08.639: INFO: namespace: e2e-tests-kubectl-lg7kv, resource: bindings, ignored listing per whitelist May 14 11:59:08.683: INFO: namespace e2e-tests-kubectl-lg7kv deletion completed in 6.566517046s • [SLOW TEST:7.399 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:59:08.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 11:59:08.842: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5152688e-95da-11ea-9b22-0242ac110018" in namespace "e2e-tests-downward-api-4xn8n" to be "success or failure" May 14 11:59:08.854: INFO: Pod "downwardapi-volume-5152688e-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.416399ms May 14 11:59:10.894: INFO: Pod "downwardapi-volume-5152688e-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051844487s May 14 11:59:12.898: INFO: Pod "downwardapi-volume-5152688e-95da-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05600479s STEP: Saw pod success May 14 11:59:12.898: INFO: Pod "downwardapi-volume-5152688e-95da-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:59:12.901: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5152688e-95da-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 11:59:12.920: INFO: Waiting for pod downwardapi-volume-5152688e-95da-11ea-9b22-0242ac110018 to disappear May 14 11:59:12.978: INFO: Pod downwardapi-volume-5152688e-95da-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:59:12.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4xn8n" for this suite. 
May 14 11:59:19.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:59:19.014: INFO: namespace: e2e-tests-downward-api-4xn8n, resource: bindings, ignored listing per whitelist May 14 11:59:19.078: INFO: namespace e2e-tests-downward-api-4xn8n deletion completed in 6.096820169s • [SLOW TEST:10.395 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:59:19.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 11:59:19.239: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57890636-95da-11ea-9b22-0242ac110018" in namespace "e2e-tests-downward-api-g9ffq" to be "success or failure" May 14 11:59:19.331: INFO: Pod "downwardapi-volume-57890636-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 92.446412ms May 14 11:59:21.512: INFO: Pod "downwardapi-volume-57890636-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.272746244s May 14 11:59:23.514: INFO: Pod "downwardapi-volume-57890636-95da-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.275504287s STEP: Saw pod success May 14 11:59:23.515: INFO: Pod "downwardapi-volume-57890636-95da-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 11:59:23.516: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-57890636-95da-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 11:59:23.531: INFO: Waiting for pod downwardapi-volume-57890636-95da-11ea-9b22-0242ac110018 to disappear May 14 11:59:23.536: INFO: Pod downwardapi-volume-57890636-95da-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:59:23.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-g9ffq" for this suite. 
May 14 11:59:29.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:59:29.670: INFO: namespace: e2e-tests-downward-api-g9ffq, resource: bindings, ignored listing per whitelist May 14 11:59:29.673: INFO: namespace e2e-tests-downward-api-g9ffq deletion completed in 6.133797386s • [SLOW TEST:10.594 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:59:29.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 14 11:59:29.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-pgn6z' May 14 11:59:35.271: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 14 11:59:35.271: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 May 14 11:59:39.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-pgn6z' May 14 11:59:39.565: INFO: stderr: "" May 14 11:59:39.565: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 11:59:39.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-pgn6z" for this suite. 
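
The deprecation warning captured above points at the replacement for --generator=deployment/v1beta1. On newer kubectl releases the same create/verify/delete cycle looks roughly like this (image taken from the log, everything else standard kubectl; kubectl create deployment labels the pods app=<deployment name>):

kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl get deployment e2e-test-nginx-deployment
kubectl get pods -l app=e2e-test-nginx-deployment
kubectl delete deployment e2e-test-nginx-deployment
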
May 14 11:59:45.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 11:59:45.605: INFO: namespace: e2e-tests-kubectl-pgn6z, resource: bindings, ignored listing per whitelist May 14 11:59:45.656: INFO: namespace e2e-tests-kubectl-pgn6z deletion completed in 6.086739233s • [SLOW TEST:15.983 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 11:59:45.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-6c4n STEP: Creating a pod to test atomic-volume-subpath May 14 11:59:45.786: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6c4n" in namespace "e2e-tests-subpath-llxj2" to be "success or failure" May 14 11:59:45.818: INFO: Pod "pod-subpath-test-configmap-6c4n": Phase="Pending", Reason="", readiness=false. Elapsed: 31.747882ms May 14 11:59:47.822: INFO: Pod "pod-subpath-test-configmap-6c4n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035913492s May 14 11:59:49.827: INFO: Pod "pod-subpath-test-configmap-6c4n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040818171s May 14 11:59:51.830: INFO: Pod "pod-subpath-test-configmap-6c4n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044510719s May 14 11:59:53.834: INFO: Pod "pod-subpath-test-configmap-6c4n": Phase="Running", Reason="", readiness=false. Elapsed: 8.048687473s May 14 11:59:55.839: INFO: Pod "pod-subpath-test-configmap-6c4n": Phase="Running", Reason="", readiness=false. Elapsed: 10.053589389s May 14 11:59:57.844: INFO: Pod "pod-subpath-test-configmap-6c4n": Phase="Running", Reason="", readiness=false. Elapsed: 12.057921217s May 14 11:59:59.849: INFO: Pod "pod-subpath-test-configmap-6c4n": Phase="Running", Reason="", readiness=false. Elapsed: 14.063105522s May 14 12:00:01.853: INFO: Pod "pod-subpath-test-configmap-6c4n": Phase="Running", Reason="", readiness=false. Elapsed: 16.067470787s May 14 12:00:03.856: INFO: Pod "pod-subpath-test-configmap-6c4n": Phase="Running", Reason="", readiness=false. Elapsed: 18.07049797s May 14 12:00:05.859: INFO: Pod "pod-subpath-test-configmap-6c4n": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.073586869s May 14 12:00:07.863: INFO: Pod "pod-subpath-test-configmap-6c4n": Phase="Running", Reason="", readiness=false. Elapsed: 22.077596957s May 14 12:00:09.867: INFO: Pod "pod-subpath-test-configmap-6c4n": Phase="Running", Reason="", readiness=false. Elapsed: 24.0814767s May 14 12:00:11.872: INFO: Pod "pod-subpath-test-configmap-6c4n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.085706845s STEP: Saw pod success May 14 12:00:11.872: INFO: Pod "pod-subpath-test-configmap-6c4n" satisfied condition "success or failure" May 14 12:00:11.874: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-6c4n container test-container-subpath-configmap-6c4n: STEP: delete the pod May 14 12:00:11.894: INFO: Waiting for pod pod-subpath-test-configmap-6c4n to disappear May 14 12:00:11.931: INFO: Pod pod-subpath-test-configmap-6c4n no longer exists STEP: Deleting pod pod-subpath-test-configmap-6c4n May 14 12:00:11.931: INFO: Deleting pod "pod-subpath-test-configmap-6c4n" in namespace "e2e-tests-subpath-llxj2" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:00:11.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-llxj2" for this suite. May 14 12:00:17.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:00:18.000: INFO: namespace: e2e-tests-subpath-llxj2, resource: bindings, ignored listing per whitelist May 14 12:00:18.019: INFO: namespace e2e-tests-subpath-llxj2 deletion completed in 6.081360727s • [SLOW TEST:32.363 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:00:18.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 14 12:00:22.287: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-7aaee0b8-95da-11ea-9b22-0242ac110018,GenerateName:,Namespace:e2e-tests-events-wbw52,SelfLink:/api/v1/namespaces/e2e-tests-events-wbw52/pods/send-events-7aaee0b8-95da-11ea-9b22-0242ac110018,UID:7aaf7a99-95da-11ea-99e8-0242ac110002,ResourceVersion:10528596,Generation:0,CreationTimestamp:2020-05-14 12:00:18 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 191181230,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-wj7d9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wj7d9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-wj7d9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a96060} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a96850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:00:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:00:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:00:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:00:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.152,StartTime:2020-05-14 12:00:18 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-14 12:00:20 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://5ef9fcbcf77251c725c34a32d622a67225ac9ac00424d93791b256145fcee42d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 14 12:00:24.291: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 14 12:00:26.296: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:00:26.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-wbw52" for this suite. 
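
The events test above creates a pod and then looks for one event emitted by the scheduler and one by the kubelet for that pod. The same events can be listed directly; the command below uses the pod name and namespace from the log (the namespace has since been deleted, so this is only a sketch of the query), and the reasons in the comment are the ones those components typically emit:

kubectl get events -n e2e-tests-events-wbw52 \
  --field-selector involvedObject.kind=Pod,involvedObject.name=send-events-7aaee0b8-95da-11ea-9b22-0242ac110018
# expect a "Scheduled" event from default-scheduler and Pulled/Created/Started events from the kubelet
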
May 14 12:01:06.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:01:06.383: INFO: namespace: e2e-tests-events-wbw52, resource: bindings, ignored listing per whitelist May 14 12:01:06.435: INFO: namespace e2e-tests-events-wbw52 deletion completed in 40.117048256s • [SLOW TEST:48.415 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:01:06.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-977bb70c-95da-11ea-9b22-0242ac110018 STEP: Creating configMap with name cm-test-opt-upd-977bb747-95da-11ea-9b22-0242ac110018 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-977bb70c-95da-11ea-9b22-0242ac110018 STEP: Updating configmap cm-test-opt-upd-977bb747-95da-11ea-9b22-0242ac110018 STEP: Creating configMap with name cm-test-opt-create-977bb758-95da-11ea-9b22-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:01:20.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-cqzl2" for this suite. 
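
The "optional updates" test mounts ConfigMaps with optional set to true, deletes one, updates another, creates a third, and waits for the volume contents to converge. A minimal sketch of one optional ConfigMap volume (names and image are illustrative; the real test wires several such volumes into one pod):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo            # illustrative
spec:
  containers:
  - name: cm-volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm-opt
      mountPath: /etc/cm-opt
  volumes:
  - name: cm-opt
    configMap:
      name: cm-test-opt-del         # may not exist yet; optional lets the pod start anyway
      optional: true
EOF
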
May 14 12:01:44.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:01:44.796: INFO: namespace: e2e-tests-configmap-cqzl2, resource: bindings, ignored listing per whitelist May 14 12:01:44.832: INFO: namespace e2e-tests-configmap-cqzl2 deletion completed in 24.088932814s • [SLOW TEST:38.397 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:01:44.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 12:01:45.003: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-trwlp" to be "success or failure" May 14 12:01:45.076: INFO: Pod "downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 73.239443ms May 14 12:01:47.080: INFO: Pod "downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076965147s May 14 12:01:49.084: INFO: Pod "downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081051805s May 14 12:01:51.257: INFO: Pod "downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253820802s May 14 12:01:53.715: INFO: Pod "downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.712205568s May 14 12:01:55.867: INFO: Pod "downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.863911721s May 14 12:01:57.951: INFO: Pod "downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 12.947859833s May 14 12:02:00.232: INFO: Pod "downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 15.229156674s May 14 12:02:02.632: INFO: Pod "downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 17.628761417s May 14 12:02:07.069: INFO: Pod "downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.065762169s STEP: Saw pod success May 14 12:02:07.069: INFO: Pod "downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 12:02:07.154: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 12:02:07.581: INFO: Waiting for pod downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018 to disappear May 14 12:02:07.619: INFO: Pod downwardapi-volume-ae6c52d7-95da-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:02:07.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-trwlp" for this suite. May 14 12:02:28.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:02:29.719: INFO: namespace: e2e-tests-projected-trwlp, resource: bindings, ignored listing per whitelist May 14 12:02:29.758: INFO: namespace e2e-tests-projected-trwlp deletion completed in 22.13470855s • [SLOW TEST:44.926 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:02:29.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 12:02:33.070: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ca0b6e80-95da-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0020351ea), BlockOwnerDeletion:(*bool)(0xc0020351eb)}} May 14 12:02:33.681: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c9568727-95da-11ea-99e8-0242ac110002", Controller:(*bool)(0xc001a94262), BlockOwnerDeletion:(*bool)(0xc001a94263)}} May 14 12:02:34.683: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c957028a-95da-11ea-99e8-0242ac110002", Controller:(*bool)(0xc002035432), BlockOwnerDeletion:(*bool)(0xc002035433)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:02:47.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-9qkzn" for this suite. 
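
The garbage-collector test above links pod1, pod2 and pod3 into an ownership cycle via metadata.ownerReferences (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and verifies that deletion is not blocked. The owner list on any of them could be inspected while they exist with (pod and namespace names as they appear in the log; the namespace has since been deleted, so this is only a sketch of the query):

kubectl get pod pod1 -n e2e-tests-gc-9qkzn -o jsonpath='{.metadata.ownerReferences}'
# prints the single OwnerReference pointing at pod3, mirroring the log line above
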
May 14 12:02:53.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:02:53.139: INFO: namespace: e2e-tests-gc-9qkzn, resource: bindings, ignored listing per whitelist May 14 12:02:53.181: INFO: namespace e2e-tests-gc-9qkzn deletion completed in 6.158426055s • [SLOW TEST:23.423 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:02:53.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace May 14 12:02:57.388: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:03:33.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-sbjkt" for this suite. May 14 12:03:39.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:03:39.611: INFO: namespace: e2e-tests-namespaces-sbjkt, resource: bindings, ignored listing per whitelist May 14 12:03:39.639: INFO: namespace e2e-tests-namespaces-sbjkt deletion completed in 6.15164955s STEP: Destroying namespace "e2e-tests-nsdeletetest-fd9tm" for this suite. May 14 12:03:39.641: INFO: Namespace e2e-tests-nsdeletetest-fd9tm was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-kwsfh" for this suite. 
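
The namespace test creates a namespace, runs a pod in it, deletes the namespace, and then checks that no pods survive. A rough manual equivalent with illustrative names (kubectl wait --for=delete assumes a reasonably recent kubectl; older kubectl run variants create a Deployment rather than a bare Pod, which does not change the outcome):

kubectl create namespace nsdelete-demo
kubectl run nsdelete-pod --image=docker.io/library/nginx:1.14-alpine -n nsdelete-demo
kubectl delete namespace nsdelete-demo
kubectl wait --for=delete namespace/nsdelete-demo --timeout=120s
kubectl get pods -n nsdelete-demo   # fails with NotFound: the namespace and its pods are gone
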
May 14 12:03:47.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:03:47.926: INFO: namespace: e2e-tests-nsdeletetest-kwsfh, resource: bindings, ignored listing per whitelist May 14 12:03:47.953: INFO: namespace e2e-tests-nsdeletetest-kwsfh deletion completed in 8.31203936s • [SLOW TEST:54.771 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:03:47.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller May 14 12:03:48.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lc94l' May 14 12:03:48.314: INFO: stderr: "" May 14 12:03:48.314: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 14 12:03:48.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lc94l' May 14 12:03:48.433: INFO: stderr: "" May 14 12:03:48.434: INFO: stdout: "update-demo-nautilus-qtsw8 update-demo-nautilus-rkhtw " May 14 12:03:48.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtsw8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lc94l' May 14 12:03:48.526: INFO: stderr: "" May 14 12:03:48.526: INFO: stdout: "" May 14 12:03:48.526: INFO: update-demo-nautilus-qtsw8 is created but not running May 14 12:03:53.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lc94l' May 14 12:03:53.648: INFO: stderr: "" May 14 12:03:53.648: INFO: stdout: "update-demo-nautilus-qtsw8 update-demo-nautilus-rkhtw " May 14 12:03:53.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtsw8 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lc94l' May 14 12:03:53.751: INFO: stderr: "" May 14 12:03:53.751: INFO: stdout: "" May 14 12:03:53.751: INFO: update-demo-nautilus-qtsw8 is created but not running May 14 12:03:58.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:00.268: INFO: stderr: "" May 14 12:04:00.268: INFO: stdout: "update-demo-nautilus-qtsw8 update-demo-nautilus-rkhtw " May 14 12:04:00.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtsw8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:00.463: INFO: stderr: "" May 14 12:04:00.463: INFO: stdout: "" May 14 12:04:00.463: INFO: update-demo-nautilus-qtsw8 is created but not running May 14 12:04:05.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:06.365: INFO: stderr: "" May 14 12:04:06.365: INFO: stdout: "update-demo-nautilus-qtsw8 update-demo-nautilus-rkhtw " May 14 12:04:06.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtsw8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:06.823: INFO: stderr: "" May 14 12:04:06.823: INFO: stdout: "" May 14 12:04:06.823: INFO: update-demo-nautilus-qtsw8 is created but not running May 14 12:04:11.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:12.120: INFO: stderr: "" May 14 12:04:12.120: INFO: stdout: "update-demo-nautilus-qtsw8 update-demo-nautilus-rkhtw " May 14 12:04:12.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtsw8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:12.222: INFO: stderr: "" May 14 12:04:12.222: INFO: stdout: "" May 14 12:04:12.222: INFO: update-demo-nautilus-qtsw8 is created but not running May 14 12:04:17.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:18.223: INFO: stderr: "" May 14 12:04:18.223: INFO: stdout: "update-demo-nautilus-qtsw8 update-demo-nautilus-rkhtw " May 14 12:04:18.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtsw8 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:18.359: INFO: stderr: "" May 14 12:04:18.359: INFO: stdout: "" May 14 12:04:18.359: INFO: update-demo-nautilus-qtsw8 is created but not running May 14 12:04:23.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:23.632: INFO: stderr: "" May 14 12:04:23.632: INFO: stdout: "update-demo-nautilus-qtsw8 update-demo-nautilus-rkhtw " May 14 12:04:23.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtsw8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:23.749: INFO: stderr: "" May 14 12:04:23.749: INFO: stdout: "true" May 14 12:04:23.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qtsw8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:23.830: INFO: stderr: "" May 14 12:04:23.830: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 12:04:23.830: INFO: validating pod update-demo-nautilus-qtsw8 May 14 12:04:23.834: INFO: got data: { "image": "nautilus.jpg" } May 14 12:04:23.834: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 14 12:04:23.834: INFO: update-demo-nautilus-qtsw8 is verified up and running May 14 12:04:23.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rkhtw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:23.927: INFO: stderr: "" May 14 12:04:23.927: INFO: stdout: "true" May 14 12:04:23.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rkhtw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:24.028: INFO: stderr: "" May 14 12:04:24.028: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 14 12:04:24.028: INFO: validating pod update-demo-nautilus-rkhtw May 14 12:04:24.031: INFO: got data: { "image": "nautilus.jpg" } May 14 12:04:24.031: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 14 12:04:24.031: INFO: update-demo-nautilus-rkhtw is verified up and running STEP: rolling-update to new replication controller May 14 12:04:24.033: INFO: scanned /root for discovery docs: May 14 12:04:24.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:54.476: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 14 12:04:54.476: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 14 12:04:54.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:54.587: INFO: stderr: "" May 14 12:04:54.587: INFO: stdout: "update-demo-kitten-jp89h update-demo-kitten-rmshv " May 14 12:04:54.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jp89h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:54.679: INFO: stderr: "" May 14 12:04:54.679: INFO: stdout: "true" May 14 12:04:54.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-jp89h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:54.771: INFO: stderr: "" May 14 12:04:54.771: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 14 12:04:54.771: INFO: validating pod update-demo-kitten-jp89h May 14 12:04:54.781: INFO: got data: { "image": "kitten.jpg" } May 14 12:04:54.781: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 14 12:04:54.781: INFO: update-demo-kitten-jp89h is verified up and running May 14 12:04:54.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rmshv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:54.874: INFO: stderr: "" May 14 12:04:54.874: INFO: stdout: "true" May 14 12:04:54.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rmshv -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lc94l' May 14 12:04:54.970: INFO: stderr: "" May 14 12:04:54.970: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 14 12:04:54.970: INFO: validating pod update-demo-kitten-rmshv May 14 12:04:54.972: INFO: got data: { "image": "kitten.jpg" } May 14 12:04:54.973: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 14 12:04:54.973: INFO: update-demo-kitten-rmshv is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:04:54.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lc94l" for this suite. May 14 12:05:39.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:05:40.933: INFO: namespace: e2e-tests-kubectl-lc94l, resource: bindings, ignored listing per whitelist May 14 12:05:40.940: INFO: namespace e2e-tests-kubectl-lc94l deletion completed in 45.965765746s • [SLOW TEST:112.987 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:05:40.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-px8xz STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-px8xz STEP: Deleting pre-stop pod May 14 12:06:09.428: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:06:09.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-px8xz" for this suite. 
May 14 12:06:48.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:06:48.192: INFO: namespace: e2e-tests-prestop-px8xz, resource: bindings, ignored listing per whitelist May 14 12:06:48.192: INFO: namespace e2e-tests-prestop-px8xz deletion completed in 38.383840895s • [SLOW TEST:67.251 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:06:48.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating secret e2e-tests-secrets-8mqwl/secret-test-636c2f31-95db-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 12:06:48.999: INFO: Waiting up to 5m0s for pod "pod-configmaps-63774286-95db-11ea-9b22-0242ac110018" in namespace "e2e-tests-secrets-8mqwl" to be "success or failure" May 14 12:06:49.195: INFO: Pod "pod-configmaps-63774286-95db-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 195.774111ms May 14 12:06:51.255: INFO: Pod "pod-configmaps-63774286-95db-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256107415s May 14 12:06:53.566: INFO: Pod "pod-configmaps-63774286-95db-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.567387265s May 14 12:06:55.570: INFO: Pod "pod-configmaps-63774286-95db-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.571064497s STEP: Saw pod success May 14 12:06:55.570: INFO: Pod "pod-configmaps-63774286-95db-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 12:06:55.572: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-63774286-95db-11ea-9b22-0242ac110018 container env-test: STEP: delete the pod May 14 12:06:55.657: INFO: Waiting for pod pod-configmaps-63774286-95db-11ea-9b22-0242ac110018 to disappear May 14 12:06:55.672: INFO: Pod pod-configmaps-63774286-95db-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:06:55.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-8mqwl" for this suite. 
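
The secrets test creates a Secret and injects one of its keys into the container environment, then checks the container's output. A minimal sketch with illustrative names and key (the suite generates unique names like secret-test-636c2f31-... above):

kubectl create secret generic secret-env-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod              # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF
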
May 14 12:07:01.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:07:01.889: INFO: namespace: e2e-tests-secrets-8mqwl, resource: bindings, ignored listing per whitelist May 14 12:07:01.928: INFO: namespace e2e-tests-secrets-8mqwl deletion completed in 6.138817873s • [SLOW TEST:13.736 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:07:01.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-secret-fwht STEP: Creating a pod to test atomic-volume-subpath May 14 12:07:02.069: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-fwht" in namespace "e2e-tests-subpath-nf64q" to be "success or failure" May 14 12:07:02.087: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Pending", Reason="", readiness=false. Elapsed: 17.726844ms May 14 12:07:04.211: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141249387s May 14 12:07:06.214: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144916121s May 14 12:07:08.218: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148656772s May 14 12:07:10.221: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=true. Elapsed: 8.151187418s May 14 12:07:12.230: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. Elapsed: 10.160952585s May 14 12:07:14.234: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. Elapsed: 12.164582301s May 14 12:07:16.238: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. Elapsed: 14.168109379s May 14 12:07:18.241: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. Elapsed: 16.171535188s May 14 12:07:20.245: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. Elapsed: 18.17515981s May 14 12:07:22.248: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. Elapsed: 20.178376259s May 14 12:07:24.251: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. Elapsed: 22.181148907s May 14 12:07:26.254: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.184987713s May 14 12:07:29.461: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. Elapsed: 27.391542656s May 14 12:07:31.823: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. Elapsed: 29.753458159s May 14 12:07:33.826: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. Elapsed: 31.757030257s May 14 12:07:36.381: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. Elapsed: 34.311598619s May 14 12:07:38.627: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. Elapsed: 36.557739398s May 14 12:07:40.632: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. Elapsed: 38.562259458s May 14 12:07:43.088: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Running", Reason="", readiness=false. Elapsed: 41.018632842s May 14 12:07:45.091: INFO: Pod "pod-subpath-test-secret-fwht": Phase="Succeeded", Reason="", readiness=false. Elapsed: 43.021935337s STEP: Saw pod success May 14 12:07:45.091: INFO: Pod "pod-subpath-test-secret-fwht" satisfied condition "success or failure" May 14 12:07:45.624: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-fwht container test-container-subpath-secret-fwht: STEP: delete the pod May 14 12:07:46.828: INFO: Waiting for pod pod-subpath-test-secret-fwht to disappear May 14 12:07:47.507: INFO: Pod pod-subpath-test-secret-fwht no longer exists STEP: Deleting pod pod-subpath-test-secret-fwht May 14 12:07:47.507: INFO: Deleting pod "pod-subpath-test-secret-fwht" in namespace "e2e-tests-subpath-nf64q" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:07:47.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-nf64q" for this suite. 
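
The subpath tests mount a single key of a Secret (or ConfigMap) at a specific path with subPath and verify the file contents while the pod runs; subPath mounts also do not pick up later updates to the source object, which is part of what the atomic-writer group exercises. A sketch of the mount shape with illustrative names (the referenced secret must already exist and contain the key data-1):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-secret-demo         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /test-volume/data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /test-volume/data-1
      subPath: data-1               # mounts only this key's file from the volume
  volumes:
  - name: secret-vol
    secret:
      secretName: subpath-secret    # illustrative
EOF
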
May 14 12:07:56.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:07:56.561: INFO: namespace: e2e-tests-subpath-nf64q, resource: bindings, ignored listing per whitelist May 14 12:07:56.607: INFO: namespace e2e-tests-subpath-nf64q deletion completed in 8.5295963s • [SLOW TEST:54.679 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:07:56.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-8wfhx May 14 12:08:20.786: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-8wfhx STEP: checking the pod's current state and verifying that restartCount is present May 14 12:08:20.788: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:12:21.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-8wfhx" for this suite. 
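
The probe test starts a pod with an HTTP liveness probe against a path that keeps returning 200 and asserts that restartCount stays at 0 for several minutes. A sketch using nginx and the root path instead of the suite's /healthz test image (all names and numbers illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo          # illustrative
spec:
  containers:
  - name: liveness
    image: docker.io/library/nginx:1.14-alpine
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 5
      failureThreshold: 3
EOF

kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
# should remain 0 while the probe keeps succeeding
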
May 14 12:12:29.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:12:29.802: INFO: namespace: e2e-tests-container-probe-8wfhx, resource: bindings, ignored listing per whitelist May 14 12:12:29.850: INFO: namespace e2e-tests-container-probe-8wfhx deletion completed in 8.232446595s • [SLOW TEST:273.242 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:12:29.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-xtv4c/configmap-test-2f536c29-95dc-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume configMaps May 14 12:12:31.921: INFO: Waiting up to 5m0s for pod "pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018" in namespace "e2e-tests-configmap-xtv4c" to be "success or failure" May 14 12:12:31.972: INFO: Pod "pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 50.72508ms May 14 12:12:34.463: INFO: Pod "pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.541821665s May 14 12:12:38.793: INFO: Pod "pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.87141513s May 14 12:12:41.093: INFO: Pod "pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 9.171603526s May 14 12:12:43.099: INFO: Pod "pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.177334889s May 14 12:12:46.308: INFO: Pod "pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.386575496s May 14 12:12:49.537: INFO: Pod "pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.616149849s May 14 12:12:51.540: INFO: Pod "pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.619117288s May 14 12:12:53.545: INFO: Pod "pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.623694031s May 14 12:12:55.550: INFO: Pod "pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.62828373s May 14 12:12:57.625: INFO: Pod "pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.703718201s May 14 12:12:59.643: INFO: Pod "pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.721608911s STEP: Saw pod success May 14 12:12:59.643: INFO: Pod "pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 12:12:59.653: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018 container env-test: STEP: delete the pod May 14 12:12:59.683: INFO: Waiting for pod pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018 to disappear May 14 12:12:59.695: INFO: Pod pod-configmaps-2f9e8892-95dc-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:12:59.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-xtv4c" for this suite. May 14 12:13:05.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:13:06.024: INFO: namespace: e2e-tests-configmap-xtv4c, resource: bindings, ignored listing per whitelist May 14 12:13:06.052: INFO: namespace e2e-tests-configmap-xtv4c deletion completed in 6.353678305s • [SLOW TEST:36.202 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:13:06.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-44d566c8-95dc-11ea-9b22-0242ac110018 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:13:13.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qb5sw" for this suite. 
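The binary-data case above relies on the ConfigMap binaryData field (base64-encoded) sitting alongside plain data keys, with both projected into the same volume. A minimal sketch of such a ConfigMap and a pod that mounts it is shown below; the names, image, command, and payload are placeholders, not the values the test generated.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd
data:
  data-1: value-1
binaryData:
  dump.bin: AAEC                      # base64 for the bytes 0x00 0x01 0x02
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  containers:
  - name: configmap-volume-test
    image: busybox                    # placeholder
    command: ["sh", "-c", "sleep 3600"]   # both files appear under /etc/configmap-volume/
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd
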
May 14 12:13:35.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:13:35.209: INFO: namespace: e2e-tests-configmap-qb5sw, resource: bindings, ignored listing per whitelist May 14 12:13:35.226: INFO: namespace e2e-tests-configmap-qb5sw deletion completed in 22.156819844s • [SLOW TEST:29.173 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:13:35.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 14 12:13:35.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-5689r' May 14 12:13:57.101: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 14 12:13:57.101: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 14 12:14:01.076: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-frkf8] May 14 12:14:01.076: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-frkf8" in namespace "e2e-tests-kubectl-5689r" to be "running and ready" May 14 12:14:01.080: INFO: Pod "e2e-test-nginx-rc-frkf8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.309416ms May 14 12:14:03.249: INFO: Pod "e2e-test-nginx-rc-frkf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173011514s May 14 12:14:05.252: INFO: Pod "e2e-test-nginx-rc-frkf8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176098522s May 14 12:14:07.494: INFO: Pod "e2e-test-nginx-rc-frkf8": Phase="Running", Reason="", readiness=true. Elapsed: 6.417985353s May 14 12:14:07.494: INFO: Pod "e2e-test-nginx-rc-frkf8" satisfied condition "running and ready" May 14 12:14:07.494: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-frkf8] May 14 12:14:07.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-5689r' May 14 12:14:07.884: INFO: stderr: "" May 14 12:14:07.884: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 14 12:14:07.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-5689r' May 14 12:14:08.039: INFO: stderr: "" May 14 12:14:08.039: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:14:08.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5689r" for this suite. May 14 12:14:34.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:14:34.085: INFO: namespace: e2e-tests-kubectl-5689r, resource: bindings, ignored listing per whitelist May 14 12:14:35.220: INFO: namespace e2e-tests-kubectl-5689r deletion completed in 27.164164439s • [SLOW TEST:59.994 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:14:35.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 12:15:04.855: INFO: Waiting up to 5m0s for pod "client-envvars-8b2c6535-95dc-11ea-9b22-0242ac110018" in namespace "e2e-tests-pods-64plr" to be "success or failure" May 14 12:15:04.963: INFO: Pod "client-envvars-8b2c6535-95dc-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 107.693155ms May 14 12:15:07.295: INFO: Pod "client-envvars-8b2c6535-95dc-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.439921622s May 14 12:15:09.297: INFO: Pod "client-envvars-8b2c6535-95dc-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.442168979s May 14 12:15:11.301: INFO: Pod "client-envvars-8b2c6535-95dc-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.446052074s STEP: Saw pod success May 14 12:15:11.301: INFO: Pod "client-envvars-8b2c6535-95dc-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 12:15:11.304: INFO: Trying to get logs from node hunter-worker pod client-envvars-8b2c6535-95dc-11ea-9b22-0242ac110018 container env3cont: STEP: delete the pod May 14 12:15:12.316: INFO: Waiting for pod client-envvars-8b2c6535-95dc-11ea-9b22-0242ac110018 to disappear May 14 12:15:12.334: INFO: Pod client-envvars-8b2c6535-95dc-11ea-9b22-0242ac110018 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:15:12.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-64plr" for this suite. May 14 12:16:20.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:16:20.477: INFO: namespace: e2e-tests-pods-64plr, resource: bindings, ignored listing per whitelist May 14 12:16:20.574: INFO: namespace e2e-tests-pods-64plr deletion completed in 1m8.233712725s • [SLOW TEST:105.354 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:16:20.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-htcxx in namespace e2e-tests-proxy-7q7lt I0514 12:16:21.481063 6 runners.go:184] Created replication controller with name: proxy-service-htcxx, namespace: e2e-tests-proxy-7q7lt, replica count: 1 I0514 12:16:22.531628 6 runners.go:184] proxy-service-htcxx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 12:16:23.531828 6 runners.go:184] proxy-service-htcxx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 12:16:24.532002 6 runners.go:184] proxy-service-htcxx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 12:16:25.532223 6 runners.go:184] proxy-service-htcxx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 12:16:26.532458 6 runners.go:184] proxy-service-htcxx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 12:16:27.532632 6 runners.go:184] proxy-service-htcxx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0514 12:16:28.532792 6 runners.go:184] proxy-service-htcxx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 12:16:29.532977 6 runners.go:184] proxy-service-htcxx Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0514 12:16:30.533296 6 runners.go:184] proxy-service-htcxx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 12:16:31.533463 6 runners.go:184] proxy-service-htcxx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0514 12:16:32.533624 6 runners.go:184] proxy-service-htcxx Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 14 12:16:32.825: INFO: Endpoint e2e-tests-proxy-7q7lt/proxy-service-htcxx is not ready yet May 14 12:16:35.264: INFO: setup took 13.931077921s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 14 12:16:35.350: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-7q7lt/pods/proxy-service-htcxx-n5p5t:162/proxy/: bar (200; 84.827469ms) May 14 12:16:35.351: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-7q7lt/pods/proxy-service-htcxx-n5p5t:1080/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0514 12:17:57.316786 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
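What this garbage-collector case checks is the Orphan propagation policy: deleting the Deployment with propagationPolicy set to Orphan removes the Deployment itself but must leave the ReplicaSet behind with its ownerReferences cleared. A rough sketch of the delete-options body involved is below (normally sent as JSON on the DELETE request; shown as YAML here for readability, and purely illustrative).

# Request body used when deleting the Deployment so its ReplicaSet is
# orphaned rather than cascaded away:
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
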
May 14 12:17:57.316: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:17:57.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-l5qwp" for this suite. May 14 12:18:09.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:18:09.440: INFO: namespace: e2e-tests-gc-l5qwp, resource: bindings, ignored listing per whitelist May 14 12:18:09.643: INFO: namespace e2e-tests-gc-l5qwp deletion completed in 12.324716847s • [SLOW TEST:43.467 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:18:09.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 12:18:15.962: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/ pods/ (200; 1.773983351s) May 14 12:18:18.539: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.576913543s) May 14 12:18:19.093: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 553.730679ms) May 14 12:18:20.896: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 1.803229936s) May 14 12:18:23.122: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.226218794s) May 14 12:18:24.198: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 1.075968063s) May 14 12:18:24.264: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 65.729846ms) May 14 12:18:24.267: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.687967ms) May 14 12:18:24.890: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 622.575272ms) May 14 12:18:24.892: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.760932ms) May 14 12:18:24.896: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.488214ms) May 14 12:18:24.899: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.808865ms) May 14 12:18:24.901: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.358704ms) May 14 12:18:24.904: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.548888ms) May 14 12:18:24.906: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.441344ms) May 14 12:18:24.908: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.189732ms) May 14 12:18:24.911: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.448108ms) May 14 12:18:24.913: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.58892ms) May 14 12:18:24.916: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.280182ms) May 14 12:18:24.918: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 2.255368ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:18:24.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-kkncl" for this suite. May 14 12:18:34.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:18:35.574: INFO: namespace: e2e-tests-proxy-kkncl, resource: bindings, ignored listing per whitelist May 14 12:18:35.619: INFO: namespace e2e-tests-proxy-kkncl deletion completed in 10.698124997s • [SLOW TEST:25.975 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:18:35.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 12:18:36.670: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:19:43.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-p6895" for this suite. 
May 14 12:20:51.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:20:51.252: INFO: namespace: e2e-tests-pods-p6895, resource: bindings, ignored listing per whitelist May 14 12:20:51.296: INFO: namespace e2e-tests-pods-p6895 deletion completed in 1m7.338207647s • [SLOW TEST:135.677 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:20:51.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-b4xss STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-b4xss to expose endpoints map[] May 14 12:20:52.398: INFO: Get endpoints failed (233.830935ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 14 12:20:53.400: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b4xss exposes endpoints map[] (1.236139677s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-b4xss STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-b4xss to expose endpoints map[pod1:[80]] May 14 12:20:58.437: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.032343439s elapsed, will retry) May 14 12:21:05.794: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (12.389366709s elapsed, will retry) May 14 12:21:08.925: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b4xss exposes endpoints map[pod1:[80]] (15.521313315s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-b4xss STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-b4xss to expose endpoints map[pod1:[80] pod2:[80]] May 14 12:21:13.411: INFO: Unexpected endpoints: found map[5aed41a1-95dd-11ea-99e8-0242ac110002:[80]], expected map[pod1:[80] pod2:[80]] (4.483022518s elapsed, will retry) May 14 12:21:16.432: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b4xss exposes endpoints map[pod1:[80] pod2:[80]] (7.503416139s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-b4xss STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-b4xss to expose endpoints map[pod2:[80]] May 14 12:21:17.672: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b4xss exposes endpoints map[pod2:[80]] (1.237150724s elapsed) STEP: 
Deleting pod pod2 in namespace e2e-tests-services-b4xss STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-b4xss to expose endpoints map[] May 14 12:21:18.743: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-b4xss exposes endpoints map[] (1.067991015s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:21:18.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-b4xss" for this suite. May 14 12:21:40.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:21:40.834: INFO: namespace: e2e-tests-services-b4xss, resource: bindings, ignored listing per whitelist May 14 12:21:40.878: INFO: namespace e2e-tests-services-b4xss deletion completed in 22.061827155s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:49.582 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:21:40.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 14 12:21:41.654: INFO: Waiting up to 5m0s for pod "pod-77ae54a6-95dd-11ea-9b22-0242ac110018" in namespace "e2e-tests-emptydir-jb2vg" to be "success or failure" May 14 12:21:41.683: INFO: Pod "pod-77ae54a6-95dd-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.281536ms May 14 12:21:44.117: INFO: Pod "pod-77ae54a6-95dd-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.463060832s May 14 12:21:46.120: INFO: Pod "pod-77ae54a6-95dd-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.466132674s May 14 12:21:48.222: INFO: Pod "pod-77ae54a6-95dd-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.568065957s May 14 12:21:50.225: INFO: Pod "pod-77ae54a6-95dd-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.571305518s STEP: Saw pod success May 14 12:21:50.226: INFO: Pod "pod-77ae54a6-95dd-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 12:21:50.227: INFO: Trying to get logs from node hunter-worker2 pod pod-77ae54a6-95dd-11ea-9b22-0242ac110018 container test-container: STEP: delete the pod May 14 12:21:50.750: INFO: Waiting for pod pod-77ae54a6-95dd-11ea-9b22-0242ac110018 to disappear May 14 12:21:50.871: INFO: Pod pod-77ae54a6-95dd-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:21:50.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-jb2vg" for this suite. May 14 12:21:56.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:21:56.947: INFO: namespace: e2e-tests-emptydir-jb2vg, resource: bindings, ignored listing per whitelist May 14 12:21:56.962: INFO: namespace e2e-tests-emptydir-jb2vg deletion completed in 6.088587731s • [SLOW TEST:16.083 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:21:56.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 14 12:22:08.185: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:22:08.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-ff5pg" for this suite. 
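The adoption/release behaviour tested above hinges on label selectors and ownerReferences: a bare pod whose labels match a ReplicaSet's selector is adopted (gains an ownerReference to the ReplicaSet), and relabelling the pod releases it again while the ReplicaSet creates a replacement. The two objects involved look roughly like the sketch below; the pod and ReplicaSet names match the log, but the label value and image are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release        # matches the selector below, so the ReplicaSet adopts it
spec:
  containers:
  - name: pod-adoption-release
    image: docker.io/library/nginx:1.14-alpine   # placeholder image
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: docker.io/library/nginx:1.14-alpine
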
May 14 12:22:38.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:22:38.447: INFO: namespace: e2e-tests-replicaset-ff5pg, resource: bindings, ignored listing per whitelist May 14 12:22:38.452: INFO: namespace e2e-tests-replicaset-ff5pg deletion completed in 30.20487373s • [SLOW TEST:41.490 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:22:38.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 14 12:22:39.110: INFO: created pod pod-service-account-defaultsa May 14 12:22:39.110: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 14 12:22:39.140: INFO: created pod pod-service-account-mountsa May 14 12:22:39.140: INFO: pod pod-service-account-mountsa service account token volume mount: true May 14 12:22:39.147: INFO: created pod pod-service-account-nomountsa May 14 12:22:39.147: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 14 12:22:39.195: INFO: created pod pod-service-account-defaultsa-mountspec May 14 12:22:39.195: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 14 12:22:39.198: INFO: created pod pod-service-account-mountsa-mountspec May 14 12:22:39.198: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 14 12:22:39.227: INFO: created pod pod-service-account-nomountsa-mountspec May 14 12:22:39.227: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 14 12:22:39.251: INFO: created pod pod-service-account-defaultsa-nomountspec May 14 12:22:39.251: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 14 12:22:39.276: INFO: created pod pod-service-account-mountsa-nomountspec May 14 12:22:39.276: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 14 12:22:39.328: INFO: created pod pod-service-account-nomountsa-nomountspec May 14 12:22:39.328: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:22:39.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-8x87p" for this suite. 
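The opt-out exercised above is the automountServiceAccountToken field, which can be set on the ServiceAccount or on the pod spec; the pod-level setting takes precedence, which is why the test creates every combination of the two. A minimal sketch of one opted-out combination follows (the ServiceAccount name, image, and command are placeholders):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: mount-test-sa                  # placeholder name
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa
spec:
  serviceAccountName: mount-test-sa
  automountServiceAccountToken: false  # pod-level setting overrides the ServiceAccount
  containers:
  - name: token-test
    image: busybox                     # placeholder
    command: ["sh", "-c", "sleep 3600"]
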
May 14 12:23:29.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:23:29.464: INFO: namespace: e2e-tests-svcaccounts-8x87p, resource: bindings, ignored listing per whitelist May 14 12:23:29.876: INFO: namespace e2e-tests-svcaccounts-8x87p deletion completed in 50.541810353s • [SLOW TEST:51.423 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:23:29.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command May 14 12:23:38.036: INFO: Waiting up to 5m0s for pod "client-containers-bc10f524-95dd-11ea-9b22-0242ac110018" in namespace "e2e-tests-containers-l9js6" to be "success or failure" May 14 12:23:38.377: INFO: Pod "client-containers-bc10f524-95dd-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 341.263639ms May 14 12:23:40.496: INFO: Pod "client-containers-bc10f524-95dd-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.459914005s May 14 12:23:42.568: INFO: Pod "client-containers-bc10f524-95dd-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.532599526s May 14 12:23:44.571: INFO: Pod "client-containers-bc10f524-95dd-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.535596655s May 14 12:23:46.575: INFO: Pod "client-containers-bc10f524-95dd-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.539074715s STEP: Saw pod success May 14 12:23:46.575: INFO: Pod "client-containers-bc10f524-95dd-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 12:23:46.597: INFO: Trying to get logs from node hunter-worker pod client-containers-bc10f524-95dd-11ea-9b22-0242ac110018 container test-container: STEP: delete the pod May 14 12:23:46.655: INFO: Waiting for pod client-containers-bc10f524-95dd-11ea-9b22-0242ac110018 to disappear May 14 12:23:46.826: INFO: Pod client-containers-bc10f524-95dd-11ea-9b22-0242ac110018 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:23:46.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-l9js6" for this suite. 
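Overriding an image's default command, as tested above, comes down to setting command (and optionally args) on the container: command replaces the image ENTRYPOINT and args replaces CMD. A small illustrative pod is shown below; the image and the echoed text are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: client-containers
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # placeholder; any image with a default entrypoint
    command: ["echo", "command overrides the image entrypoint"]
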
May 14 12:23:55.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:23:55.900: INFO: namespace: e2e-tests-containers-l9js6, resource: bindings, ignored listing per whitelist May 14 12:23:55.917: INFO: namespace e2e-tests-containers-l9js6 deletion completed in 9.088563942s • [SLOW TEST:26.041 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:23:55.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-c7edb95b-95dd-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume configMaps May 14 12:23:56.389: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c7f53a09-95dd-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-m9j8g" to be "success or failure" May 14 12:23:56.392: INFO: Pod "pod-projected-configmaps-c7f53a09-95dd-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.227238ms May 14 12:23:58.532: INFO: Pod "pod-projected-configmaps-c7f53a09-95dd-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143023887s May 14 12:24:01.125: INFO: Pod "pod-projected-configmaps-c7f53a09-95dd-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.736655815s May 14 12:24:03.128: INFO: Pod "pod-projected-configmaps-c7f53a09-95dd-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.739662471s STEP: Saw pod success May 14 12:24:03.128: INFO: Pod "pod-projected-configmaps-c7f53a09-95dd-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 12:24:03.131: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-c7f53a09-95dd-11ea-9b22-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 14 12:24:03.355: INFO: Waiting for pod pod-projected-configmaps-c7f53a09-95dd-11ea-9b22-0242ac110018 to disappear May 14 12:24:03.416: INFO: Pod pod-projected-configmaps-c7f53a09-95dd-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:24:03.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-m9j8g" for this suite. 
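The "mappings and Item mode" variant above maps a ConfigMap key to a chosen path and file mode inside a projected volume. A hedged sketch of the objects involved follows; the names, key, path, mode, image, and command are placeholders rather than the generated test values.

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                     # placeholder
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1
            path: path/to/data-1
            mode: 0400                 # octal file mode for the mapped file
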
May 14 12:24:10.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:24:10.479: INFO: namespace: e2e-tests-projected-m9j8g, resource: bindings, ignored listing per whitelist May 14 12:24:10.480: INFO: namespace e2e-tests-projected-m9j8g deletion completed in 7.058751944s • [SLOW TEST:14.563 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:24:10.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-z7kr STEP: Creating a pod to test atomic-volume-subpath May 14 12:24:10.600: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-z7kr" in namespace "e2e-tests-subpath-xd2ts" to be "success or failure" May 14 12:24:10.605: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144427ms May 14 12:24:12.608: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00745277s May 14 12:24:14.820: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21924857s May 14 12:24:17.227: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.626884217s May 14 12:24:19.349: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.748644633s May 14 12:24:21.911: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Pending", Reason="", readiness=false. Elapsed: 11.310782164s May 14 12:24:24.053: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Pending", Reason="", readiness=false. Elapsed: 13.45270716s May 14 12:24:26.329: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Pending", Reason="", readiness=false. Elapsed: 15.728969736s May 14 12:24:28.333: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Pending", Reason="", readiness=false. Elapsed: 17.732703229s May 14 12:24:30.338: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Pending", Reason="", readiness=false. Elapsed: 19.737169052s May 14 12:24:32.344: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=true. Elapsed: 21.743656095s May 14 12:24:34.693: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.092469469s May 14 12:24:36.878: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 26.277490037s May 14 12:24:39.823: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 29.222201474s May 14 12:24:41.826: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 31.225910028s May 14 12:24:44.102: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 33.501502276s May 14 12:24:47.312: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 36.711939885s May 14 12:24:49.315: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 38.715050752s May 14 12:24:54.001: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 43.400120839s May 14 12:24:57.313: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 46.71251249s May 14 12:24:59.316: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 48.715758987s May 14 12:25:02.708: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 52.107855074s May 14 12:25:04.802: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 54.201210908s May 14 12:25:07.192: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 56.591959378s May 14 12:25:09.349: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 58.748730921s May 14 12:25:13.103: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.502950694s May 14 12:25:16.174: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m5.573548069s May 14 12:25:19.465: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.864969961s May 14 12:25:22.909: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.308647707s May 14 12:25:27.263: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.662716538s May 14 12:25:29.266: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.665592299s May 14 12:25:31.269: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.668452244s May 14 12:25:34.187: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m23.58611388s May 14 12:25:37.222: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.621184247s May 14 12:25:39.225: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.624566553s May 14 12:25:41.497: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.89680574s May 14 12:25:44.151: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m33.550858413s May 14 12:25:48.361: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m37.760845288s May 14 12:25:51.175: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.574290184s May 14 12:25:53.179: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.578820108s May 14 12:25:55.184: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.583696157s May 14 12:25:57.187: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.58613462s May 14 12:25:59.270: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m48.669589187s May 14 12:26:01.273: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.672578991s May 14 12:26:03.275: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.674869296s May 14 12:26:05.279: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m54.678605674s May 14 12:26:07.284: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.683346109s May 14 12:26:09.288: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.687636732s May 14 12:26:11.292: INFO: Pod "pod-subpath-test-configmap-z7kr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m0.691168022s STEP: Saw pod success May 14 12:26:11.292: INFO: Pod "pod-subpath-test-configmap-z7kr" satisfied condition "success or failure" May 14 12:26:11.294: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-z7kr container test-container-subpath-configmap-z7kr: STEP: delete the pod May 14 12:26:13.144: INFO: Waiting for pod pod-subpath-test-configmap-z7kr to disappear May 14 12:26:13.875: INFO: Pod pod-subpath-test-configmap-z7kr no longer exists STEP: Deleting pod pod-subpath-test-configmap-z7kr May 14 12:26:13.875: INFO: Deleting pod "pod-subpath-test-configmap-z7kr" in namespace "e2e-tests-subpath-xd2ts" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:26:13.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-xd2ts" for this suite. 
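The subpath test above mounts a single file out of a ConfigMap-backed volume via volumeMounts[].subPath, and the container keeps reading that file while the test runs; the point of the atomic-writer check is that updates to the ConfigMap do not propagate through a subPath mount. A rough sketch of that arrangement is below (names, key, image, and command are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap
    image: busybox                     # placeholder
    command: ["sh", "-c", "cat /etc/config/my-key && sleep 60"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config/my-key
      subPath: my-key                  # mount just this key of the volume as a single file
  volumes:
  - name: config-volume
    configMap:
      name: my-configmap               # placeholder; must contain the key my-key
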
May 14 12:26:24.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:26:25.774: INFO: namespace: e2e-tests-subpath-xd2ts, resource: bindings, ignored listing per whitelist May 14 12:26:26.005: INFO: namespace e2e-tests-subpath-xd2ts deletion completed in 12.055907802s • [SLOW TEST:135.525 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:26:26.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-21479311-95de-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 12:26:26.222: INFO: Waiting up to 5m0s for pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018" in namespace "e2e-tests-secrets-8zlbz" to be "success or failure" May 14 12:26:26.232: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.283651ms May 14 12:26:28.258: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036187136s May 14 12:26:30.261: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039260425s May 14 12:26:32.357: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1347469s May 14 12:26:34.839: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.617080233s May 14 12:26:37.470: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 11.247964827s May 14 12:26:39.474: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.252384438s May 14 12:26:41.478: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 15.255641763s May 14 12:26:43.481: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 17.258655782s May 14 12:26:45.535: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.312593567s May 14 12:26:47.871: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.649170305s May 14 12:26:52.972: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 26.750346001s May 14 12:26:55.324: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.102319301s May 14 12:26:58.554: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 32.331406763s May 14 12:27:00.795: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 34.572445946s May 14 12:27:03.775: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 37.552970355s May 14 12:27:06.032: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 39.809488623s May 14 12:27:08.200: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 41.977507945s May 14 12:27:11.788: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 45.566121819s May 14 12:27:13.793: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 47.570504413s May 14 12:27:15.797: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 49.574739542s May 14 12:27:18.698: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 52.475545912s May 14 12:27:20.701: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 54.479025643s May 14 12:27:22.705: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 56.482880938s May 14 12:27:24.932: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 58.709482488s May 14 12:27:26.935: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m0.713145455s May 14 12:27:30.489: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m4.266942088s May 14 12:27:32.637: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m6.415358202s May 14 12:27:34.655: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m8.432445563s May 14 12:27:38.213: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m11.991107871s May 14 12:27:41.637: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m15.414987304s May 14 12:27:43.640: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m17.417491422s May 14 12:27:45.643: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m19.421009351s May 14 12:27:47.646: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. 
Elapsed: 1m21.424134555s May 14 12:27:49.912: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m23.690008292s May 14 12:27:52.620: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m26.398211178s May 14 12:27:54.623: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m28.401090133s May 14 12:27:57.562: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m31.340374777s May 14 12:28:01.405: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m35.183313452s May 14 12:28:03.408: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m37.186305792s May 14 12:28:05.412: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m39.19016376s May 14 12:28:07.416: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m41.193531551s May 14 12:28:09.888: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m43.665596329s May 14 12:28:11.891: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m45.668671608s May 14 12:28:15.171: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m48.94848288s May 14 12:28:17.691: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m51.469024295s May 14 12:28:20.150: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m53.928285987s May 14 12:28:22.756: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m56.534369782s May 14 12:28:24.759: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m58.536799659s May 14 12:28:27.074: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m0.852119238s May 14 12:28:29.077: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m2.854818927s STEP: Saw pod success May 14 12:28:29.077: INFO: Pod "pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 12:28:29.079: INFO: Trying to get logs from node hunter-worker pod pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018 container secret-volume-test: STEP: delete the pod May 14 12:28:29.176: INFO: Waiting for pod pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018 to disappear May 14 12:28:29.180: INFO: Pod pod-secrets-214b9a02-95de-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:28:29.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-8zlbz" for this suite. 
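[editorial aside] The Secrets volume spec above creates a secret and then a pod that mounts it with a key-to-path mapping and an explicit item mode before asserting on the mounted file. As a rough sketch of that pattern in Go with the k8s.io/api types (illustrative names, image, command, and mode value; not the exact fixture the framework builds):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretVolumePod builds a pod that mounts an existing secret as a volume,
// remapping one key to a new path with a per-item file mode, which is the
// behaviour the test above verifies. The secret itself ("secret-test-map-example"
// here) is assumed to have been created beforehand.
func secretVolumePod() *corev1.Pod {
	mode := int32(0400) // assumed item mode for illustration
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map-example",
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1",
							Mode: &mode,
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
}

func main() {
	fmt.Println(secretVolumePod().Name)
}

The pod runs to completion ("Succeeded" in the log), so the file contents and mode can be asserted from its logs, which is what "Trying to get logs from node ... container secret-volume-test" below corresponds to.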
May 14 12:28:35.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:28:35.316: INFO: namespace: e2e-tests-secrets-8zlbz, resource: bindings, ignored listing per whitelist May 14 12:28:35.359: INFO: namespace e2e-tests-secrets-8zlbz deletion completed in 6.175171954s • [SLOW TEST:129.353 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:28:35.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted May 14 12:28:52.868: INFO: 10 pods remaining May 14 12:28:52.868: INFO: 5 pods has nil DeletionTimestamp May 14 12:28:52.868: INFO: May 14 12:28:56.314: INFO: 5 pods remaining May 14 12:28:56.314: INFO: 5 pods has nil DeletionTimestamp May 14 12:28:56.314: INFO: STEP: Gathering metrics W0514 12:29:00.608039 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 14 12:29:00.608: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:29:00.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-sj2m4" for this suite. May 14 12:29:17.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:29:17.845: INFO: namespace: e2e-tests-gc-sj2m4, resource: bindings, ignored listing per whitelist May 14 12:29:17.893: INFO: namespace e2e-tests-gc-sj2m4 deletion completed in 17.21856793s • [SLOW TEST:42.534 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:29:17.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 STEP: Collecting events from namespace "e2e-tests-kubelet-test-8fvxn". STEP: Found 2 events. 
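[editorial aside] The Kubelet spec that starts above runs a busybox container whose command always fails and waits for the kubelet to report a terminated container state with a reason; in this run, as the failure recorded further down shows, the container never left ContainerCreating and the 60s wait timed out. A minimal sketch of the condition being waited for (an assumed helper, not the framework's code), using the k8s.io/api types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// terminatedReason reports the Reason of the first container that has reached
// a Terminated state, or false if no container has terminated yet.
func terminatedReason(pod *corev1.Pod) (string, bool) {
	for _, cs := range pod.Status.ContainerStatuses {
		if t := cs.State.Terminated; t != nil {
			return t.Reason, true
		}
	}
	return "", false
}

func main() {
	// With an empty status the helper reports "not terminated yet", which is
	// the state this particular run was stuck in when the timeout fired.
	var pod corev1.Pod
	if reason, ok := terminatedReason(&pod); ok {
		fmt.Println("terminated with reason:", reason)
	} else {
		fmt.Println("container not terminated yet")
	}
}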
May 14 12:30:19.501: INFO: At 2020-05-14 12:29:20 +0000 UTC - event for bin-false87d7c5bf-95de-11ea-9b22-0242ac110018: {default-scheduler } Scheduled: Successfully assigned e2e-tests-kubelet-test-8fvxn/bin-false87d7c5bf-95de-11ea-9b22-0242ac110018 to hunter-worker May 14 12:30:19.501: INFO: At 2020-05-14 12:29:23 +0000 UTC - event for bin-false87d7c5bf-95de-11ea-9b22-0242ac110018: {kubelet hunter-worker} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine May 14 12:30:19.511: INFO: POD NODE PHASE GRACE CONDITIONS May 14 12:30:19.511: INFO: bin-false87d7c5bf-95de-11ea-9b22-0242ac110018 hunter-worker Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:29:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:29:20 +0000 UTC ContainersNotReady containers with unready status: [bin-false87d7c5bf-95de-11ea-9b22-0242ac110018]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:29:20 +0000 UTC ContainersNotReady containers with unready status: [bin-false87d7c5bf-95de-11ea-9b22-0242ac110018]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:29:19 +0000 UTC }] May 14 12:30:19.511: INFO: coredns-54ff9cd656-4h7lb hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:32 +0000 UTC }] May 14 12:30:19.511: INFO: coredns-54ff9cd656-8vrkk hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:32 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:32 +0000 UTC }] May 14 12:30:19.511: INFO: etcd-hunter-control-plane hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC }] May 14 12:30:19.511: INFO: kindnet-54h7m hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC }] May 14 12:30:19.511: INFO: kindnet-l2xm6 hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:08 +0000 UTC }] May 14 12:30:19.511: INFO: kindnet-mtqrs hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC }] May 14 12:30:19.511: INFO: 
kube-apiserver-hunter-control-plane hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC }] May 14 12:30:19.511: INFO: kube-controller-manager-hunter-control-plane hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC }] May 14 12:30:19.511: INFO: kube-proxy-mmppc hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:08 +0000 UTC }] May 14 12:30:19.511: INFO: kube-proxy-s52ll hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC }] May 14 12:30:19.511: INFO: kube-proxy-szbng hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:11 +0000 UTC }] May 14 12:30:19.511: INFO: kube-scheduler-hunter-control-plane hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 14:51:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 14:51:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:22:36 +0000 UTC }] May 14 12:30:19.511: INFO: local-path-provisioner-77cfdd744c-q47vg hunter-control-plane Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:41 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 14:51:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-01 14:51:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-15 18:23:41 +0000 UTC }] May 14 12:30:19.511: INFO: May 14 12:30:21.083: INFO: Logging node info for node hunter-control-plane May 14 12:30:21.166: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-control-plane,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-control-plane,UID:faa448b1-66e9-11ea-99e8-0242ac110002,ResourceVersion:10532826,Generation:0,CreationTimestamp:2020-03-15 18:22:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-control-plane,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: 
/run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-14 12:30:16 +0000 UTC 2020-03-15 18:22:49 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-14 12:30:16 +0000 UTC 2020-03-15 18:22:49 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-14 12:30:16 +0000 UTC 2020-03-15 18:22:49 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-14 12:30:16 +0000 UTC 2020-03-15 18:23:41 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.2} {Hostname hunter-control-plane}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3c4716968dac483293a23c2100ad64a5,SystemUUID:683417f7-64ca-431d-b8ac-22e73b26255e,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.13.12,KubeProxyVersion:v1.13.12,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.2.24] 219889590} {[k8s.gcr.io/kube-apiserver:v1.13.12] 182535474} {[k8s.gcr.io/kube-controller-manager:v1.13.12] 147799876} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.13.12] 82073262} {[k8s.gcr.io/kube-scheduler:v1.13.12] 81117489} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.2.6] 40280546} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[k8s.gcr.io/pause:3.1] 746479}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 14 12:30:21.166: INFO: Logging kubelet events for node hunter-control-plane May 14 12:30:21.168: INFO: Logging pods the kubelet thinks is on node hunter-control-plane May 14 12:30:21.175: INFO: etcd-hunter-control-plane started at (0+0 container statuses recorded) May 14 12:30:21.175: INFO: kube-proxy-mmppc started at 2020-03-15 18:23:08 +0000 UTC (0+1 container statuses recorded) May 14 12:30:21.175: INFO: Container kube-proxy ready: true, restart count 0 May 14 12:30:21.175: INFO: kindnet-l2xm6 started at 2020-03-15 18:23:08 +0000 UTC (0+1 container statuses recorded) May 14 12:30:21.175: INFO: Container kindnet-cni ready: true, restart count 0 May 14 12:30:21.175: INFO: local-path-provisioner-77cfdd744c-q47vg started at 2020-03-15 18:23:41 +0000 UTC (0+1 container statuses recorded) May 14 12:30:21.175: INFO: Container local-path-provisioner ready: true, restart count 5 May 14 12:30:21.175: INFO: kube-apiserver-hunter-control-plane started at (0+0 container statuses recorded) May 14 12:30:21.175: INFO: 
kube-controller-manager-hunter-control-plane started at (0+0 container statuses recorded) May 14 12:30:21.175: INFO: kube-scheduler-hunter-control-plane started at (0+0 container statuses recorded) W0514 12:30:21.178719 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 14 12:30:21.869: INFO: Latency metrics for node hunter-control-plane May 14 12:30:21.869: INFO: Logging node info for node hunter-worker May 14 12:30:23.064: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-worker,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-worker,UID:06f62848-66ea-11ea-99e8-0242ac110002,ResourceVersion:10532827,Generation:0,CreationTimestamp:2020-03-15 18:23:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-worker,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-14 12:30:16 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-14 12:30:16 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-14 12:30:16 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-14 12:30:16 +0000 UTC 2020-03-15 18:23:32 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.3} {Hostname hunter-worker}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:1ba315df6f584c2d8a0cf4ead2df3551,SystemUUID:64c934e2-ea4e-48d7-92ee-50d04109360b,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.13.12,KubeProxyVersion:v1.13.12,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.2.24] 219889590} {[k8s.gcr.io/kube-apiserver:v1.13.12] 182535474} {[k8s.gcr.io/kube-controller-manager:v1.13.12] 147799876} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} 
{[k8s.gcr.io/kube-proxy:v1.13.12] 82073262} {[k8s.gcr.io/kube-scheduler:v1.13.12] 81117489} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[docker.io/library/nginx@sha256:86ae264c3f4acb99b2dee4d0098c40cb8c46dcf9e1148f05d3a51c4df6758c12 docker.io/library/nginx@sha256:404ed8de56dd47adadadf9e2641b1ba6ad5ce69abf251421f91d7601a2808ebe docker.io/library/nginx:latest] 51030102} {[docker.io/library/nginx@sha256:d96d2b8f130247d1402389f80a6250382c0882e7fdd5484d2932e813e8b3742f docker.io/library/nginx@sha256:f1a695380f06cf363bf45fa85774cfcb5e60fe1557504715ff96a1933d6cbf51 docker.io/library/nginx@sha256:d81f010955749350ef31a119fb94b180fde8b2f157da351ff5667ae037968b28] 51030066} {[docker.io/library/nginx@sha256:282530fcb7cd19f3848c7b611043f82ae4be3781cb00105a1d593d7e6286b596 docker.io/library/nginx@sha256:e538de36780000ab3502edcdadd1e6990b981abc3f61f13584224b9e1674922c] 51022481} {[docker.io/library/nginx@sha256:2539d4344dd18e1df02be842ffc435f8e1f699cfc55516e2cf2cb16b7a9aea0b] 51021980} {[k8s.gcr.io/coredns:1.2.6] 40280546} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 gcr.io/kubernetes-e2e-test-images/liveness:1.0] 1743226} 
{[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 14 12:30:23.064: INFO: Logging kubelet events for node hunter-worker May 14 12:30:23.405: INFO: Logging pods the kubelet thinks is on node hunter-worker May 14 12:30:23.962: INFO: kube-proxy-szbng started at 2020-03-15 18:23:11 +0000 UTC (0+1 container statuses recorded) May 14 12:30:23.962: INFO: Container kube-proxy ready: true, restart count 0 May 14 12:30:23.962: INFO: kindnet-54h7m started at 2020-03-15 18:23:12 +0000 UTC (0+1 container statuses recorded) May 14 12:30:23.962: INFO: Container kindnet-cni ready: true, restart count 0 May 14 12:30:23.962: INFO: coredns-54ff9cd656-4h7lb started at 2020-03-15 18:23:32 +0000 UTC (0+1 container statuses recorded) May 14 12:30:23.962: INFO: Container coredns ready: true, restart count 0 May 14 12:30:23.962: INFO: bin-false87d7c5bf-95de-11ea-9b22-0242ac110018 started at 2020-05-14 12:29:20 +0000 UTC (0+1 container statuses recorded) May 14 12:30:23.962: INFO: Container bin-false87d7c5bf-95de-11ea-9b22-0242ac110018 ready: false, restart count 0 W0514 12:30:24.664553 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 14 12:30:25.411: INFO: Latency metrics for node hunter-worker May 14 12:30:25.411: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:1m2.535504s} May 14 12:30:25.411: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:1m2.535504s} May 14 12:30:25.411: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:23.743583s} May 14 12:30:25.411: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:23.729546s} May 14 12:30:25.411: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:19.339108s} May 14 12:30:25.411: INFO: Logging node info for node hunter-worker2 May 14 12:30:25.414: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:hunter-worker2,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/hunter-worker2,UID:073ca987-66ea-11ea-99e8-0242ac110002,ResourceVersion:10532835,Generation:0,CreationTimestamp:2020-03-15 18:23:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: hunter-worker2,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-05-14 12:30:23 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-05-14 12:30:23 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-05-14 12:30:23 +0000 UTC 2020-03-15 18:23:11 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-05-14 12:30:23 +0000 UTC 2020-03-15 18:23:32 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.4} {Hostname hunter-worker2}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:dde8970cf1ce42c0bbb19e593c484fda,SystemUUID:9c4b9179-843d-4e50-859c-2ca9335431a5,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.13.12,KubeProxyVersion:v1.13.12,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.2.24] 219889590} {[k8s.gcr.io/kube-apiserver:v1.13.12] 182535474} {[k8s.gcr.io/kube-controller-manager:v1.13.12] 147799876} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} 
{[docker.io/kindest/kindnetd:0.5.4] 113207016} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/kube-proxy:v1.13.12] 82073262} {[k8s.gcr.io/kube-scheduler:v1.13.12] 81117489} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[docker.io/library/nginx@sha256:404ed8de56dd47adadadf9e2641b1ba6ad5ce69abf251421f91d7601a2808ebe docker.io/library/nginx@sha256:86ae264c3f4acb99b2dee4d0098c40cb8c46dcf9e1148f05d3a51c4df6758c12 docker.io/library/nginx:latest] 51030102} {[docker.io/library/nginx@sha256:d81f010955749350ef31a119fb94b180fde8b2f157da351ff5667ae037968b28 docker.io/library/nginx@sha256:d96d2b8f130247d1402389f80a6250382c0882e7fdd5484d2932e813e8b3742f] 51030066} {[docker.io/library/nginx@sha256:282530fcb7cd19f3848c7b611043f82ae4be3781cb00105a1d593d7e6286b596 docker.io/library/nginx@sha256:e538de36780000ab3502edcdadd1e6990b981abc3f61f13584224b9e1674922c] 51022481} {[docker.io/library/nginx@sha256:2539d4344dd18e1df02be842ffc435f8e1f699cfc55516e2cf2cb16b7a9aea0b] 51021980} {[k8s.gcr.io/coredns:1.2.6] 40280546} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 
gcr.io/kubernetes-e2e-test-images/liveness:1.0] 1743226} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} May 14 12:30:25.414: INFO: Logging kubelet events for node hunter-worker2 May 14 12:30:25.418: INFO: Logging pods the kubelet thinks is on node hunter-worker2 May 14 12:30:25.423: INFO: coredns-54ff9cd656-8vrkk started at 2020-03-15 18:23:32 +0000 UTC (0+1 container statuses recorded) May 14 12:30:25.423: INFO: Container coredns ready: true, restart count 0 May 14 12:30:25.423: INFO: kindnet-mtqrs started at 2020-03-15 18:23:12 +0000 UTC (0+1 container statuses recorded) May 14 12:30:25.423: INFO: Container kindnet-cni ready: true, restart count 0 May 14 12:30:25.423: INFO: kube-proxy-s52ll started at 2020-03-15 18:23:12 +0000 UTC (0+1 container statuses recorded) May 14 12:30:25.423: INFO: Container kube-proxy ready: true, restart count 0 W0514 12:30:25.425690 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 14 12:30:25.471: INFO: Latency metrics for node hunter-worker2 May 14 12:30:25.471: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:1m4.883532s} May 14 12:30:25.471: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:1m4.883532s} May 14 12:30:25.471: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.5 Latency:1m1.693335s} May 14 12:30:25.471: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:24.174983s} May 14 12:30:25.471: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:10.714338s} May 14 12:30:25.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-8fvxn" for this suite. May 14 12:30:55.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:30:55.748: INFO: namespace: e2e-tests-kubelet-test-8fvxn, resource: bindings, ignored listing per whitelist May 14 12:30:55.780: INFO: namespace e2e-tests-kubelet-test-8fvxn deletion completed in 30.305541718s • Failure [97.887 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Timed out after 60.000s. Expected <*errors.errorString | 0xc001e92bb0>: { s: "expected state to be terminated. 
Got pod status: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:29:20 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:29:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-false87d7c5bf-95de-11ea-9b22-0242ac110018]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:29:20 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-false87d7c5bf-95de-11ea-9b22-0242ac110018]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 12:29:19 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:172.17.0.3 PodIP: StartTime:2020-05-14 12:29:20 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:bin-false87d7c5bf-95de-11ea-9b22-0242ac110018 State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:docker.io/library/busybox:1.29 ImageID: ContainerID:}] QOSClass:BestEffort}", } to be nil /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:123 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:30:55.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs May 14 12:30:56.359: INFO: Waiting up to 5m0s for pod "pod-c24f7518-95de-11ea-9b22-0242ac110018" in namespace "e2e-tests-emptydir-wg5pj" to be "success or failure" May 14 12:30:56.472: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 113.451789ms May 14 12:30:59.298: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.939170964s May 14 12:31:01.301: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.942412296s May 14 12:31:03.305: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 6.946197476s May 14 12:31:05.308: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 8.949216362s May 14 12:31:08.922: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.563130632s May 14 12:31:10.926: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.566938448s May 14 12:31:13.174: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.815666001s May 14 12:31:15.177: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 18.818535397s May 14 12:31:17.182: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 20.82292427s May 14 12:31:19.849: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.490462629s May 14 12:31:21.914: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 25.555267768s May 14 12:31:23.917: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 27.558453503s May 14 12:31:25.920: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.561481794s May 14 12:31:28.094: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 31.73585709s May 14 12:31:31.436: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 35.077743111s May 14 12:31:33.439: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 37.080285587s May 14 12:31:36.316: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 39.957621674s May 14 12:31:38.320: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 41.961647566s May 14 12:31:40.631: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 44.272049899s May 14 12:31:42.927: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 46.568496608s May 14 12:31:46.403: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 50.044210512s May 14 12:31:48.898: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 52.539465252s May 14 12:31:51.128: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 54.769290052s May 14 12:31:53.340: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 56.981197099s May 14 12:31:55.345: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 58.985934266s May 14 12:31:57.347: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.988376647s May 14 12:31:59.975: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.616819562s May 14 12:32:02.346: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.987367177s May 14 12:32:04.443: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m8.083904732s May 14 12:32:06.446: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 1m10.087479533s STEP: Saw pod success May 14 12:32:06.446: INFO: Pod "pod-c24f7518-95de-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 12:32:06.449: INFO: Trying to get logs from node hunter-worker2 pod pod-c24f7518-95de-11ea-9b22-0242ac110018 container test-container: STEP: delete the pod May 14 12:32:06.719: INFO: Waiting for pod pod-c24f7518-95de-11ea-9b22-0242ac110018 to disappear May 14 12:32:06.831: INFO: Pod pod-c24f7518-95de-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:32:06.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wg5pj" for this suite. May 14 12:32:17.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:32:17.134: INFO: namespace: e2e-tests-emptydir-wg5pj, resource: bindings, ignored listing per whitelist May 14 12:32:17.144: INFO: namespace e2e-tests-emptydir-wg5pj deletion completed in 10.310090191s • [SLOW TEST:81.364 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:32:17.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-s8nl6.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-s8nl6.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-s8nl6.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-s8nl6.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-s8nl6.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-s8nl6.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 14 12:33:44.444: INFO: DNS probes using e2e-tests-dns-s8nl6/dns-test-f3c31ceb-95de-11ea-9b22-0242ac110018 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:33:45.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-s8nl6" for this suite. 
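[editorial aside] The probe scripts above resolve kubernetes.default and its svc/svc.cluster.local forms over both UDP and TCP, plus host entries and a per-pod A record, writing an OK marker file for each successful lookup. A minimal Go sketch of the same resolution check (assumed target list; it only succeeds from inside a pod where the cluster DNS search domains apply, and it does not distinguish UDP from TCP the way the dig invocations do):

package main

import (
	"fmt"
	"net"
)

func main() {
	names := []string{
		"kubernetes.default",
		"kubernetes.default.svc",
		"kubernetes.default.svc.cluster.local",
	}
	for _, name := range names {
		addrs, err := net.LookupHost(name)
		if err != nil || len(addrs) == 0 {
			fmt.Printf("FAIL %s: %v\n", name, err)
			continue
		}
		fmt.Printf("OK %s -> %v\n", name, addrs)
	}
}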
May 14 12:33:58.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:33:58.169: INFO: namespace: e2e-tests-dns-s8nl6, resource: bindings, ignored listing per whitelist May 14 12:33:58.174: INFO: namespace e2e-tests-dns-s8nl6 deletion completed in 11.443218623s • [SLOW TEST:101.029 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:33:58.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 12:33:59.477: INFO: Creating deployment "nginx-deployment" May 14 12:34:00.746: INFO: Waiting for observed generation 1 May 14 12:34:08.907: INFO: Waiting for all required pods to come up May 14 12:34:10.437: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running May 14 12:35:53.849: INFO: Waiting for deployment "nginx-deployment" to complete May 14 12:35:54.447: INFO: Updating deployment "nginx-deployment" with a non-existent image May 14 12:35:54.795: INFO: Updating deployment nginx-deployment May 14 12:35:54.795: INFO: Waiting for observed generation 2 May 14 12:36:00.497: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 14 12:36:18.171: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 14 12:36:20.055: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 14 12:36:23.279: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 14 12:36:23.279: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 14 12:36:24.878: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 14 12:36:26.872: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 14 12:36:26.872: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 14 12:36:28.615: INFO: Updating deployment nginx-deployment May 14 12:36:28.615: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 14 12:36:31.889: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 14 12:36:40.556: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 14 12:36:45.945: INFO: 
Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-86x79,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-86x79/deployments/nginx-deployment,UID:2f76ec00-95df-11ea-99e8-0242ac110002,ResourceVersion:10533670,Generation:3,CreationTimestamp:2020-05-14 12:33:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:21,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-05-14 12:36:30 +0000 UTC 2020-05-14 12:36:30 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-14 12:36:36 +0000 UTC 2020-05-14 12:34:05 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} May 14 12:36:48.457: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-86x79,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-86x79/replicasets/nginx-deployment-5c98f8fb5,UID:74331e43-95df-11ea-99e8-0242ac110002,ResourceVersion:10533657,Generation:3,CreationTimestamp:2020-05-14 12:35:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2f76ec00-95df-11ea-99e8-0242ac110002 0xc0011242b7 0xc0011242b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 14 12:36:48.457: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 14 12:36:48.457: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-86x79,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-86x79/replicasets/nginx-deployment-85ddf47c5d,UID:32c1a7b2-95df-11ea-99e8-0242ac110002,ResourceVersion:10533655,Generation:3,CreationTimestamp:2020-05-14 12:34:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 2f76ec00-95df-11ea-99e8-0242ac110002 0xc0011243d7 0xc0011243d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 14 12:36:51.400: INFO: Pod "nginx-deployment-5c98f8fb5-25dd6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-25dd6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-5c98f8fb5-25dd6,UID:8b1b0c20-95df-11ea-99e8-0242ac110002,ResourceVersion:10533642,Generation:0,CreationTimestamp:2020-05-14 12:36:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 74331e43-95df-11ea-99e8-0242ac110002 0xc000d72c47 0xc000d72c48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d72cc0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc000d72ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.400: INFO: Pod "nginx-deployment-5c98f8fb5-5bkr6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5bkr6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-5c98f8fb5-5bkr6,UID:8ae0657e-95df-11ea-99e8-0242ac110002,ResourceVersion:10533636,Generation:0,CreationTimestamp:2020-05-14 12:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 74331e43-95df-11ea-99e8-0242ac110002 0xc000d72d57 0xc000d72d58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d72dd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d72df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.401: INFO: Pod "nginx-deployment-5c98f8fb5-5kvbm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5kvbm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-5c98f8fb5-5kvbm,UID:8abf20b6-95df-11ea-99e8-0242ac110002,ResourceVersion:10533671,Generation:0,CreationTimestamp:2020-05-14 12:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 74331e43-95df-11ea-99e8-0242ac110002 0xc000d72e67 0xc000d72e68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d72ee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d72f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-14 12:36:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.401: INFO: Pod "nginx-deployment-5c98f8fb5-6jns2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6jns2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-5c98f8fb5-6jns2,UID:8ae053f6-95df-11ea-99e8-0242ac110002,ResourceVersion:10533626,Generation:0,CreationTimestamp:2020-05-14 12:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 74331e43-95df-11ea-99e8-0242ac110002 0xc000d72fc7 0xc000d72fc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] 
{map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d73040} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d73060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.401: INFO: Pod "nginx-deployment-5c98f8fb5-7vq7k" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7vq7k,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-5c98f8fb5-7vq7k,UID:748ebed5-95df-11ea-99e8-0242ac110002,ResourceVersion:10533584,Generation:0,CreationTimestamp:2020-05-14 12:35:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 74331e43-95df-11ea-99e8-0242ac110002 0xc000d73187 0xc000d73188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d73200} {node.kubernetes.io/unreachable Exists 
NoExecute 0xc000d73220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.96,StartTime:2020-05-14 12:35:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.401: INFO: Pod "nginx-deployment-5c98f8fb5-bsrrk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bsrrk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-5c98f8fb5-bsrrk,UID:7591f55c-95df-11ea-99e8-0242ac110002,ResourceVersion:10533575,Generation:0,CreationTimestamp:2020-05-14 12:35:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 74331e43-95df-11ea-99e8-0242ac110002 0xc000d733b7 0xc000d733b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d734b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d734d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:02 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.188,StartTime:2020-05-14 12:36:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "nginx:404",} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.402: INFO: Pod "nginx-deployment-5c98f8fb5-ggtbk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ggtbk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-5c98f8fb5-ggtbk,UID:8ae061ad-95df-11ea-99e8-0242ac110002,ResourceVersion:10533694,Generation:0,CreationTimestamp:2020-05-14 12:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 74331e43-95df-11ea-99e8-0242ac110002 0xc000d73647 0xc000d73648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d73700} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d73720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-14 12:36:41 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.402: INFO: Pod "nginx-deployment-5c98f8fb5-hbvkl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hbvkl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-5c98f8fb5-hbvkl,UID:796ab343-95df-11ea-99e8-0242ac110002,ResourceVersion:10533590,Generation:0,CreationTimestamp:2020-05-14 12:36:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 74331e43-95df-11ea-99e8-0242ac110002 0xc000d73877 0xc000d73878}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d73950} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d739e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.97,StartTime:2020-05-14 12:36:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "nginx:404",} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.402: INFO: Pod "nginx-deployment-5c98f8fb5-k5jv2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-k5jv2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-5c98f8fb5-k5jv2,UID:8abf3b8f-95df-11ea-99e8-0242ac110002,ResourceVersion:10533680,Generation:0,CreationTimestamp:2020-05-14 12:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 74331e43-95df-11ea-99e8-0242ac110002 0xc000d73bf7 0xc000d73bf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d73cc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d73cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-14 12:36:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.402: INFO: Pod "nginx-deployment-5c98f8fb5-kmr5f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kmr5f,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-5c98f8fb5-kmr5f,UID:7591f959-95df-11ea-99e8-0242ac110002,ResourceVersion:10533585,Generation:0,CreationTimestamp:2020-05-14 12:35:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 74331e43-95df-11ea-99e8-0242ac110002 0xc000d73ec7 0xc000d73ec8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000d73f70} {node.kubernetes.io/unreachable Exists NoExecute 0xc000d73f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:01 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.187,StartTime:2020-05-14 12:36:01 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "nginx:404",} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.402: INFO: Pod "nginx-deployment-5c98f8fb5-kwsp9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kwsp9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-5c98f8fb5-kwsp9,UID:8ae0777e-95df-11ea-99e8-0242ac110002,ResourceVersion:10533627,Generation:0,CreationTimestamp:2020-05-14 12:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 74331e43-95df-11ea-99e8-0242ac110002 0xc00124a177 0xc00124a178}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124a250} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124a270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.402: INFO: Pod "nginx-deployment-5c98f8fb5-n5tvs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n5tvs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-5c98f8fb5-n5tvs,UID:79cf5a54-95df-11ea-99e8-0242ac110002,ResourceVersion:10533598,Generation:0,CreationTimestamp:2020-05-14 12:36:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 74331e43-95df-11ea-99e8-0242ac110002 0xc00124a2e7 0xc00124a2e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc00124a370} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124a390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.189,StartTime:2020-05-14 12:36:08 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "nginx:404",} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.403: INFO: Pod "nginx-deployment-5c98f8fb5-wvhj8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wvhj8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-5c98f8fb5-wvhj8,UID:8ab63ca9-95df-11ea-99e8-0242ac110002,ResourceVersion:10533664,Generation:0,CreationTimestamp:2020-05-14 12:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 74331e43-95df-11ea-99e8-0242ac110002 0xc00124a497 0xc00124a498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124a510} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124a530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-14 12:36:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.403: INFO: Pod "nginx-deployment-85ddf47c5d-2hxxz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2hxxz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-2hxxz,UID:34c82955-95df-11ea-99e8-0242ac110002,ResourceVersion:10533443,Generation:0,CreationTimestamp:2020-05-14 12:34:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00124a5f7 0xc00124a5f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124a680} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124a6a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.92,StartTime:2020-05-14 12:34:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 12:35:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
containerd://52e065d23c1a703e42c7644c8044810a9f865d8140b41e93bcfd7ed1949a2cb1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.403: INFO: Pod "nginx-deployment-85ddf47c5d-588cw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-588cw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-588cw,UID:3515a8fe-95df-11ea-99e8-0242ac110002,ResourceVersion:10533459,Generation:0,CreationTimestamp:2020-05-14 12:34:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00124a767 0xc00124a768}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124a800} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124a820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.95,StartTime:2020-05-14 12:34:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 12:35:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://92a04d8b86266bfbe4d8052ea0cda64ee722118a3c8988eb681d21482e4e388e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.403: INFO: Pod "nginx-deployment-85ddf47c5d-6f4xq" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6f4xq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-6f4xq,UID:34c84fe9-95df-11ea-99e8-0242ac110002,ResourceVersion:10533463,Generation:0,CreationTimestamp:2020-05-14 12:34:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00124a947 0xc00124a948}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124abd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124abf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.93,StartTime:2020-05-14 12:34:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 12:35:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://ce883ad7a072e22fce9a998edc4ba08386619baed4437ed8b135ee27b757f28e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.403: INFO: Pod "nginx-deployment-85ddf47c5d-d5v8v" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-d5v8v,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-d5v8v,UID:8b8ecc0e-95df-11ea-99e8-0242ac110002,ResourceVersion:10533660,Generation:0,CreationTimestamp:2020-05-14 12:36:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00124ad77 0xc00124ad78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124afd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124b010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.403: INFO: Pod "nginx-deployment-85ddf47c5d-db5r9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-db5r9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-db5r9,UID:3515a557-95df-11ea-99e8-0242ac110002,ResourceVersion:10533431,Generation:0,CreationTimestamp:2020-05-14 12:34:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00124b0f7 0xc00124b0f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124b190} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124b210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:14 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.186,StartTime:2020-05-14 12:34:14 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 12:35:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://58ef8e2b4e5b9b256f28c06676e1c9a73f12044382c97813a23d09f7a7b777b0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.404: INFO: Pod "nginx-deployment-85ddf47c5d-ff9n7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ff9n7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-ff9n7,UID:8b8ece78-95df-11ea-99e8-0242ac110002,ResourceVersion:10533662,Generation:0,CreationTimestamp:2020-05-14 12:36:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00124b377 0xc00124b378}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124b4e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124b560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.404: INFO: Pod "nginx-deployment-85ddf47c5d-fvtgh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fvtgh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-fvtgh,UID:8b1b1036-95df-11ea-99e8-0242ac110002,ResourceVersion:10533650,Generation:0,CreationTimestamp:2020-05-14 12:36:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00124b657 0xc00124b658}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124b750} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00124b9a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.404: INFO: Pod "nginx-deployment-85ddf47c5d-js2nr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-js2nr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-js2nr,UID:8b1b0489-95df-11ea-99e8-0242ac110002,ResourceVersion:10533649,Generation:0,CreationTimestamp:2020-05-14 12:36:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00124bc27 0xc00124bc28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00124bf50} {node.kubernetes.io/unreachable Exists NoExecute 0xc00124bf90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.404: INFO: Pod "nginx-deployment-85ddf47c5d-ljqt7" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ljqt7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-ljqt7,UID:8ab631ac-95df-11ea-99e8-0242ac110002,ResourceVersion:10533687,Generation:0,CreationTimestamp:2020-05-14 12:36:32 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00186a027 0xc00186a028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186a0a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00186a0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:32 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-14 12:36:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.404: INFO: Pod "nginx-deployment-85ddf47c5d-nnt97" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nnt97,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-nnt97,UID:8ae02e54-95df-11ea-99e8-0242ac110002,ResourceVersion:10533635,Generation:0,CreationTimestamp:2020-05-14 12:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00186a2d7 0xc00186a2d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186a3f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00186a410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.404: INFO: Pod "nginx-deployment-85ddf47c5d-pvzk5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pvzk5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-pvzk5,UID:8ae03923-95df-11ea-99e8-0242ac110002,ResourceVersion:10533637,Generation:0,CreationTimestamp:2020-05-14 12:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00186a527 0xc00186a528}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186a600} {node.kubernetes.io/unreachable Exists NoExecute 0xc00186a690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.405: INFO: Pod "nginx-deployment-85ddf47c5d-q9zp5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-q9zp5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-q9zp5,UID:8b1b0f02-95df-11ea-99e8-0242ac110002,ResourceVersion:10533651,Generation:0,CreationTimestamp:2020-05-14 12:36:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00186a707 0xc00186a708}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186a7b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00186a7d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.405: INFO: Pod "nginx-deployment-85ddf47c5d-qn42j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qn42j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-qn42j,UID:8b8ec0b1-95df-11ea-99e8-0242ac110002,ResourceVersion:10533659,Generation:0,CreationTimestamp:2020-05-14 12:36:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00186a847 0xc00186a848}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186add0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00186adf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.405: INFO: Pod "nginx-deployment-85ddf47c5d-rv2jn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rv2jn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-rv2jn,UID:34c833a2-95df-11ea-99e8-0242ac110002,ResourceVersion:10533438,Generation:0,CreationTimestamp:2020-05-14 12:34:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00186aed7 
0xc00186aed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186af80} {node.kubernetes.io/unreachable Exists NoExecute 0xc00186b190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:11 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.184,StartTime:2020-05-14 12:34:11 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 12:35:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a36cea13494c4c7446fad11f4f5f381546366964dc25e3ab68a277455e64ffaa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.405: INFO: Pod "nginx-deployment-85ddf47c5d-s4674" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s4674,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-s4674,UID:8b8ecc99-95df-11ea-99e8-0242ac110002,ResourceVersion:10533663,Generation:0,CreationTimestamp:2020-05-14 12:36:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00186b497 0xc00186b498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] 
map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186b5e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00186b600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.405: INFO: Pod "nginx-deployment-85ddf47c5d-s57pm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s57pm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-s57pm,UID:34854425-95df-11ea-99e8-0242ac110002,ResourceVersion:10533458,Generation:0,CreationTimestamp:2020-05-14 12:34:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00186b677 0xc00186b678}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186b760} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc00186b780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.182,StartTime:2020-05-14 12:34:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 12:35:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8bde6a0a0755e770a9aa985f7d8f090ede52120e3d90e07d146ae924f629eae5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.405: INFO: Pod "nginx-deployment-85ddf47c5d-v7k8k" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v7k8k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-v7k8k,UID:3484b19e-95df-11ea-99e8-0242ac110002,ResourceVersion:10533435,Generation:0,CreationTimestamp:2020-05-14 12:34:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00186b8d7 0xc00186b8d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186b950} {node.kubernetes.io/unreachable Exists NoExecute 0xc00186b970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2020-05-14 12:35:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:07 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.91,StartTime:2020-05-14 12:34:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 12:35:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2967c789bcbbac6b28b706a393f3892d6966495756b8d8506e5eb01a3cf018f2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.405: INFO: Pod "nginx-deployment-85ddf47c5d-v7ngd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v7ngd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-v7ngd,UID:8b1afd98-95df-11ea-99e8-0242ac110002,ResourceVersion:10533641,Generation:0,CreationTimestamp:2020-05-14 12:36:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00186ba37 0xc00186ba38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186bab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00186bad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.405: INFO: Pod "nginx-deployment-85ddf47c5d-zq4nb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zq4nb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-zq4nb,UID:8b8ecf70-95df-11ea-99e8-0242ac110002,ResourceVersion:10533661,Generation:0,CreationTimestamp:2020-05-14 12:36:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00186bb47 0xc00186bb48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186bc90} {node.kubernetes.io/unreachable Exists NoExecute 0xc00186bcb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:36:34 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 14 12:36:51.406: INFO: Pod "nginx-deployment-85ddf47c5d-zqcs8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-zqcs8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-86x79,SelfLink:/api/v1/namespaces/e2e-tests-deployment-86x79/pods/nginx-deployment-85ddf47c5d-zqcs8,UID:34855663-95df-11ea-99e8-0242ac110002,ResourceVersion:10533450,Generation:0,CreationTimestamp:2020-05-14 12:34:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 32c1a7b2-95df-11ea-99e8-0242ac110002 0xc00186bd27 0xc00186bd28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmb2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmb2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-2tmb2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186bda0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00186bdc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:35:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:34:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.183,StartTime:2020-05-14 12:34:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-14 12:35:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9f4792eb465bc32cad3a9c3faafcbdc84e7a0c2c7f9f326fbe8ad22c6ea0c192}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:36:51.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-86x79" for this suite. 
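[Editor's note] The Deployment manifest the proportional-scaling test applies is not reproduced in this log; only the resulting pod dumps above are. A minimal sketch consistent with those dumps (namespace, image docker.io/library/nginx:1.14-alpine, label name: nginx) is given below. The replica count and the surge/unavailable budgets are illustrative assumptions, not values read from the test source; proportional scaling refers to how a mid-rollout scale-up is split across the old and new ReplicaSets according to these budgets.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      namespace: e2e-tests-deployment-86x79
    spec:
      replicas: 10                  # illustrative; the test scales this while a rollout is in progress
      selector:
        matchLabels:
          name: nginx
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 3               # assumed budgets; extra replicas from a scale-up are
          maxUnavailable: 2         # distributed proportionally across old/new ReplicaSets
      template:
        metadata:
          labels:
            name: nginx
        spec:
          containers:
          - name: nginx
            image: docker.io/library/nginx:1.14-alpine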
May 14 12:39:12.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:39:12.431: INFO: namespace: e2e-tests-deployment-86x79, resource: bindings, ignored listing per whitelist May 14 12:39:12.441: INFO: namespace e2e-tests-deployment-86x79 deletion completed in 2m16.265575428s • [SLOW TEST:314.267 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:39:12.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 14 12:39:17.832: INFO: Waiting up to 5m0s for pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018" in namespace "e2e-tests-downward-api-2fnd9" to be "success or failure" May 14 12:39:17.837: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.396459ms May 14 12:39:21.337: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.505516207s May 14 12:39:23.341: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 5.509350342s May 14 12:39:28.341: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 10.509193102s May 14 12:39:30.398: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.566508402s May 14 12:39:32.730: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 14.898456624s May 14 12:39:34.734: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 16.902156567s May 14 12:39:36.994: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 19.162115085s May 14 12:39:39.367: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 21.534959052s May 14 12:39:41.370: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 23.53809271s May 14 12:39:43.373: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.54145983s May 14 12:39:46.162: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 28.329884827s May 14 12:39:48.165: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 30.333004013s May 14 12:39:50.502: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 32.670625359s May 14 12:39:54.341: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 36.509227312s May 14 12:39:56.752: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 38.919984046s May 14 12:39:58.964: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 41.131991071s May 14 12:40:00.967: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 43.134755407s May 14 12:40:02.969: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 45.137413424s May 14 12:40:05.245: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 47.413520354s May 14 12:40:07.455: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 49.623326881s May 14 12:40:09.460: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 51.627745786s May 14 12:40:11.463: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 53.630836228s May 14 12:40:13.465: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 55.633390558s May 14 12:40:15.887: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 58.054907551s May 14 12:40:18.054: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.222297234s May 14 12:40:20.449: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.617464253s May 14 12:40:22.453: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.62111747s May 14 12:40:24.457: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.625289506s May 14 12:40:26.647: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.815273336s May 14 12:40:28.650: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.818296819s May 14 12:40:34.989: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m17.157239033s May 14 12:40:37.533: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m19.701573687s May 14 12:40:39.536: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. 
Elapsed: 1m21.704587374s May 14 12:40:42.591: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m24.758925164s May 14 12:40:45.863: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m28.031579487s May 14 12:40:48.790: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m30.957666138s May 14 12:40:50.793: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m32.960695056s May 14 12:40:52.796: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m34.964157469s May 14 12:40:58.335: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m40.503399791s May 14 12:41:02.582: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m44.750507239s May 14 12:41:06.601: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m48.769235424s May 14 12:41:10.434: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m52.602243101s May 14 12:41:12.702: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m54.870225568s May 14 12:41:15.625: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 1m57.793558748s May 14 12:41:19.699: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m1.866897976s May 14 12:41:22.745: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m4.913449746s May 14 12:41:25.792: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m7.959968574s May 14 12:41:27.967: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m10.135520746s May 14 12:41:30.642: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m12.810230058s May 14 12:41:32.645: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m14.813579731s May 14 12:41:35.475: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m17.64358438s May 14 12:41:38.632: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m20.800140506s May 14 12:41:40.799: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m22.967460723s May 14 12:41:42.802: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m24.970230845s May 14 12:41:44.805: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m26.973163704s May 14 12:41:46.808: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. 
Elapsed: 2m28.976201511s May 14 12:41:48.812: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m30.979682773s May 14 12:41:50.831: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m32.999045656s May 14 12:41:53.715: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m35.882830559s May 14 12:41:57.207: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m39.375557853s May 14 12:41:59.210: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m41.378249267s May 14 12:42:01.215: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m43.382880996s May 14 12:42:03.379: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m45.547170529s May 14 12:42:06.900: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m49.067940679s May 14 12:42:11.529: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m53.696673785s May 14 12:42:13.532: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m55.699984457s May 14 12:42:15.741: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m57.909359618s May 14 12:42:17.744: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 2m59.911981391s May 14 12:42:19.747: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m1.915235399s May 14 12:42:21.750: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m3.918268611s May 14 12:42:23.754: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m5.922123193s May 14 12:42:26.638: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m8.806081898s May 14 12:42:28.641: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m10.809384016s May 14 12:42:30.644: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m12.812202211s May 14 12:42:34.344: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m16.512294263s May 14 12:42:36.679: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m18.847555899s May 14 12:42:38.683: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m20.851020064s May 14 12:42:40.703: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m22.871093762s May 14 12:42:43.673: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. 
Elapsed: 3m25.841439883s May 14 12:42:46.947: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m29.115339852s May 14 12:42:48.952: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m31.119995552s May 14 12:42:51.120: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m33.288066795s May 14 12:42:53.123: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m35.291482854s May 14 12:42:55.126: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m37.294277467s May 14 12:42:57.219: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m39.386890763s May 14 12:42:59.320: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m41.48814425s May 14 12:43:01.395: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 3m43.562777657s May 14 12:43:03.847: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3m46.015002706s STEP: Saw pod success May 14 12:43:03.847: INFO: Pod "downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 12:43:03.849: INFO: Trying to get logs from node hunter-worker pod downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018 container dapi-container: STEP: delete the pod May 14 12:43:04.646: INFO: Waiting for pod downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018 to disappear May 14 12:43:04.859: INFO: Pod downward-api-ecaf5f8a-95df-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:43:04.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2fnd9" for this suite. 
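[Editor's note] The pod spec behind this Downward API test is not shown in the log (only the container name dapi-container appears). A minimal sketch of how limits.cpu/memory and requests.cpu/memory are exposed to a container as environment variables via resourceFieldRef follows; the pod name, image, command, env var names, and resource quantities are illustrative assumptions.

    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-example    # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox              # assumed image; the log only names the container
        command: ["sh", "-c", "env"]   # print the injected variables and exit
        resources:
          requests:
            cpu: 250m
            memory: 32Mi
          limits:
            cpu: 500m
            memory: 64Mi
        env:
        - name: CPU_REQUEST
          valueFrom:
            resourceFieldRef:
              resource: requests.cpu
        - name: MEMORY_REQUEST
          valueFrom:
            resourceFieldRef:
              resource: requests.memory
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory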
May 14 12:43:11.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:43:11.133: INFO: namespace: e2e-tests-downward-api-2fnd9, resource: bindings, ignored listing per whitelist May 14 12:43:11.153: INFO: namespace e2e-tests-downward-api-2fnd9 deletion completed in 6.289684164s • [SLOW TEST:238.712 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:43:11.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 12:43:11.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' May 14 12:43:11.525: INFO: stderr: "" May 14 12:43:11.525: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:06Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" May 14 12:43:11.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cvrzq' May 14 12:43:27.417: INFO: stderr: "" May 14 12:43:27.417: INFO: stdout: "replicationcontroller/redis-master created\n" May 14 12:43:27.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-cvrzq' May 14 12:43:27.765: INFO: stderr: "" May 14 12:43:27.765: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
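[Editor's note] The manifests piped to the two 'kubectl create -f -' invocations above are not echoed in the log. A sketch consistent with the 'kubectl describe' output further down is given here; the selector, labels, image, and port are taken from that output, while the replica count and everything else are assumptions.

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: redis-master
    spec:
      replicas: 1
      selector:
        app: redis
        role: master
      template:
        metadata:
          labels:
            app: redis
            role: master
        spec:
          containers:
          - name: redis-master
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0
            ports:
            - containerPort: 6379
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: redis-master
    spec:
      selector:
        app: redis
        role: master
      ports:
      - port: 6379
        targetPort: 6379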
May 14 12:43:28.818: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:28.818: INFO: Found 0 / 1 May 14 12:43:29.769: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:29.769: INFO: Found 0 / 1 May 14 12:43:31.494: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:31.494: INFO: Found 0 / 1 May 14 12:43:31.830: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:31.830: INFO: Found 0 / 1 May 14 12:43:33.071: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:33.071: INFO: Found 0 / 1 May 14 12:43:34.254: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:34.254: INFO: Found 0 / 1 May 14 12:43:35.538: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:35.538: INFO: Found 0 / 1 May 14 12:43:35.768: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:35.768: INFO: Found 0 / 1 May 14 12:43:37.688: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:37.688: INFO: Found 0 / 1 May 14 12:43:39.224: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:39.224: INFO: Found 0 / 1 May 14 12:43:39.909: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:39.909: INFO: Found 0 / 1 May 14 12:43:40.770: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:40.770: INFO: Found 0 / 1 May 14 12:43:44.383: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:44.383: INFO: Found 0 / 1 May 14 12:43:46.816: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:46.816: INFO: Found 0 / 1 May 14 12:43:52.246: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:52.246: INFO: Found 0 / 1 May 14 12:43:53.764: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:53.764: INFO: Found 0 / 1 May 14 12:43:53.768: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:53.768: INFO: Found 0 / 1 May 14 12:43:55.309: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:55.309: INFO: Found 0 / 1 May 14 12:43:56.608: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:56.608: INFO: Found 0 / 1 May 14 12:43:56.974: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:56.974: INFO: Found 0 / 1 May 14 12:43:58.255: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:58.255: INFO: Found 0 / 1 May 14 12:43:59.722: INFO: Selector matched 1 pods for map[app:redis] May 14 12:43:59.722: INFO: Found 0 / 1 May 14 12:44:00.219: INFO: Selector matched 1 pods for map[app:redis] May 14 12:44:00.219: INFO: Found 0 / 1 May 14 12:44:00.968: INFO: Selector matched 1 pods for map[app:redis] May 14 12:44:00.968: INFO: Found 0 / 1 May 14 12:44:01.956: INFO: Selector matched 1 pods for map[app:redis] May 14 12:44:01.956: INFO: Found 0 / 1 May 14 12:44:02.845: INFO: Selector matched 1 pods for map[app:redis] May 14 12:44:02.846: INFO: Found 0 / 1 May 14 12:44:03.769: INFO: Selector matched 1 pods for map[app:redis] May 14 12:44:03.769: INFO: Found 0 / 1 May 14 12:44:05.500: INFO: Selector matched 1 pods for map[app:redis] May 14 12:44:05.500: INFO: Found 0 / 1 May 14 12:44:06.487: INFO: Selector matched 1 pods for map[app:redis] May 14 12:44:06.487: INFO: Found 0 / 1 May 14 12:44:08.861: INFO: Selector matched 1 pods for map[app:redis] May 14 12:44:08.861: INFO: Found 0 / 1 May 14 12:44:09.903: INFO: Selector matched 1 pods for map[app:redis] May 14 12:44:09.903: INFO: Found 0 / 1 May 14 12:44:10.770: INFO: Selector matched 1 pods for map[app:redis] May 14 12:44:10.770: INFO: Found 0 / 1 May 14 12:44:11.837: INFO: Selector matched 1 pods for 
map[app:redis] May 14 12:44:11.837: INFO: Found 0 / 1 May 14 12:44:12.768: INFO: Selector matched 1 pods for map[app:redis] May 14 12:44:12.768: INFO: Found 0 / 1 May 14 12:44:13.777: INFO: Selector matched 1 pods for map[app:redis] May 14 12:44:13.777: INFO: Found 1 / 1 May 14 12:44:13.777: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 14 12:44:13.780: INFO: Selector matched 1 pods for map[app:redis] May 14 12:44:13.780: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 14 12:44:13.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-4thj8 --namespace=e2e-tests-kubectl-cvrzq' May 14 12:44:13.880: INFO: stderr: "" May 14 12:44:13.880: INFO: stdout: "Name: redis-master-4thj8\nNamespace: e2e-tests-kubectl-cvrzq\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Thu, 14 May 2020 12:43:27 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.200\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://58e6d92db3c6c5714ed630a6d9a14f5da22aac20822a49050964d45da07916d6\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 14 May 2020 12:44:12 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-496rn (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-496rn:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-496rn\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 46s default-scheduler Successfully assigned e2e-tests-kubectl-cvrzq/redis-master-4thj8 to hunter-worker2\n Normal Pulled 45s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n" May 14 12:44:13.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-cvrzq' May 14 12:44:13.988: INFO: stderr: "" May 14 12:44:13.988: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-cvrzq\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 46s replication-controller Created pod: redis-master-4thj8\n" May 14 12:44:13.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-cvrzq' May 14 12:44:14.087: INFO: stderr: "" May 14 12:44:14.087: INFO: stdout: "Name: redis-master\nNamespace: 
e2e-tests-kubectl-cvrzq\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.187.214\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.200:6379\nSession Affinity: None\nEvents: \n" May 14 12:44:14.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' May 14 12:44:14.221: INFO: stderr: "" May 14 12:44:14.221: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 14 May 2020 12:44:13 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 14 May 2020 12:44:13 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 14 May 2020 12:44:13 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 14 May 2020 12:44:13 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 59d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 59d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 59d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 59d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 14 12:44:14.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-cvrzq' May 14 12:44:14.316: INFO: stderr: "" May 
14 12:44:14.316: INFO: stdout: "Name: e2e-tests-kubectl-cvrzq\nLabels: e2e-framework=kubectl\n e2e-run=399b812e-95d0-11ea-9b22-0242ac110018\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:44:14.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-cvrzq" for this suite. May 14 12:44:58.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:44:58.360: INFO: namespace: e2e-tests-kubectl-cvrzq, resource: bindings, ignored listing per whitelist May 14 12:44:58.391: INFO: namespace e2e-tests-kubectl-cvrzq deletion completed in 44.073506434s • [SLOW TEST:107.238 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:44:58.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components May 14 12:44:59.294: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend May 14 12:44:59.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gzzq4' May 14 12:44:59.776: INFO: stderr: "" May 14 12:44:59.776: INFO: stdout: "service/redis-slave created\n" May 14 12:44:59.776: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend May 14 12:44:59.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gzzq4' May 14 12:45:00.981: INFO: stderr: "" May 14 12:45:00.981: INFO: stdout: "service/redis-master created\n" May 14 12:45:00.981: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 14 12:45:00.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gzzq4' May 14 12:45:01.390: INFO: stderr: "" May 14 12:45:01.390: INFO: stdout: "service/frontend created\n" May 14 12:45:01.390: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 May 14 12:45:01.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gzzq4' May 14 12:45:01.708: INFO: stderr: "" May 14 12:45:01.708: INFO: stdout: "deployment.extensions/frontend created\n" May 14 12:45:01.708: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 14 12:45:01.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gzzq4' May 14 12:45:02.119: INFO: stderr: "" May 14 12:45:02.119: INFO: stdout: "deployment.extensions/redis-master created\n" May 14 12:45:02.119: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 May 14 12:45:02.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gzzq4' May 14 12:45:02.565: INFO: stderr: "" May 14 12:45:02.566: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app May 14 12:45:02.566: INFO: Waiting for all frontend pods to be Running. May 14 12:45:32.617: INFO: Waiting for frontend to serve content. May 14 12:45:32.943: INFO: Trying to add a new entry to the guestbook. May 14 12:45:33.160: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 14 12:45:33.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gzzq4' May 14 12:45:33.642: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 14 12:45:33.642: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 14 12:45:33.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gzzq4' May 14 12:45:34.981: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 12:45:34.981: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 14 12:45:34.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gzzq4' May 14 12:45:36.508: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 12:45:36.508: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 14 12:45:36.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gzzq4' May 14 12:45:36.647: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 12:45:36.647: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources May 14 12:45:36.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gzzq4' May 14 12:45:38.933: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 12:45:38.933: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 14 12:45:38.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gzzq4' May 14 12:45:40.344: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 14 12:45:40.344: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:45:40.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gzzq4" for this suite. 
May 14 12:46:57.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:46:57.567: INFO: namespace: e2e-tests-kubectl-gzzq4, resource: bindings, ignored listing per whitelist May 14 12:46:57.577: INFO: namespace e2e-tests-kubectl-gzzq4 deletion completed in 1m16.794358577s • [SLOW TEST:119.186 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:46:57.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode May 14 12:46:57.683: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-x4h88" to be "success or failure" May 14 12:46:57.703: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.129334ms May 14 12:46:59.737: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053939281s May 14 12:47:02.180: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.4975024s May 14 12:47:04.184: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.500842695s May 14 12:47:06.767: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.084548186s May 14 12:47:09.284: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.600785129s May 14 12:47:11.286: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 13.603108474s May 14 12:47:13.311: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 15.628468756s May 14 12:47:15.468: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 17.785360688s May 14 12:47:17.546: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 19.863199492s May 14 12:47:19.630: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 21.947300163s May 14 12:47:21.744: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 24.061020171s May 14 12:47:24.188: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 26.504772108s May 14 12:47:27.844: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 30.161594584s May 14 12:47:29.850: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 32.166756748s May 14 12:47:31.853: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 34.170395809s May 14 12:47:33.857: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.173715067s STEP: Saw pod success May 14 12:47:33.857: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 14 12:47:33.859: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 14 12:47:33.881: INFO: Waiting for pod pod-host-path-test to disappear May 14 12:47:33.887: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:47:33.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-x4h88" for this suite. May 14 12:47:39.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:47:39.948: INFO: namespace: e2e-tests-hostpath-x4h88, resource: bindings, ignored listing per whitelist May 14 12:47:40.024: INFO: namespace e2e-tests-hostpath-x4h88 deletion completed in 6.134931134s • [SLOW TEST:42.447 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:47:40.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 14 12:47:40.169: INFO: Waiting up to 5m0s for pod "pod-18a0bd09-95e1-11ea-9b22-0242ac110018" in namespace "e2e-tests-emptydir-c264j" to be "success or failure" May 14 12:47:40.174: INFO: Pod "pod-18a0bd09-95e1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.427081ms May 14 12:47:42.177: INFO: Pod "pod-18a0bd09-95e1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007676839s May 14 12:47:44.180: INFO: Pod "pod-18a0bd09-95e1-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.011071638s May 14 12:47:46.184: INFO: Pod "pod-18a0bd09-95e1-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014665047s STEP: Saw pod success May 14 12:47:46.184: INFO: Pod "pod-18a0bd09-95e1-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 12:47:46.187: INFO: Trying to get logs from node hunter-worker pod pod-18a0bd09-95e1-11ea-9b22-0242ac110018 container test-container: STEP: delete the pod May 14 12:47:46.205: INFO: Waiting for pod pod-18a0bd09-95e1-11ea-9b22-0242ac110018 to disappear May 14 12:47:46.210: INFO: Pod pod-18a0bd09-95e1-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:47:46.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-c264j" for this suite. May 14 12:47:52.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:47:52.320: INFO: namespace: e2e-tests-emptydir-c264j, resource: bindings, ignored listing per whitelist May 14 12:47:52.346: INFO: namespace e2e-tests-emptydir-c264j deletion completed in 6.133450306s • [SLOW TEST:12.322 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:47:52.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-1ff7fd3b-95e1-11ea-9b22-0242ac110018 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-1ff7fd3b-95e1-11ea-9b22-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:47:58.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7j7ph" for this suite. 
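The configmap test above never prints the objects it creates; a minimal sketch of the pattern it exercises (a ConfigMap mounted as a volume, then edited in place) follows. The names, key, mount path, and busybox command are illustrative rather than the generated names from this run.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd            # the run uses a generated, UID-suffixed name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo           # illustrative name
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 2; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd

Because configMap volumes are projected by the kubelet and periodically re-synced, updating data-1 in the ConfigMap eventually changes the file content inside the running pod without a restart, which is what the "waiting to observe update in volume" step verifies.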
May 14 12:48:18.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:48:18.668: INFO: namespace: e2e-tests-configmap-7j7ph, resource: bindings, ignored listing per whitelist May 14 12:48:18.705: INFO: namespace e2e-tests-configmap-7j7ph deletion completed in 20.086143958s • [SLOW TEST:26.359 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:48:18.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-2fab9f65-95e1-11ea-9b22-0242ac110018 STEP: Creating secret with name s-test-opt-upd-2fab9fbd-95e1-11ea-9b22-0242ac110018 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2fab9f65-95e1-11ea-9b22-0242ac110018 STEP: Updating secret s-test-opt-upd-2fab9fbd-95e1-11ea-9b22-0242ac110018 STEP: Creating secret with name s-test-opt-create-2fab9fe9-95e1-11ea-9b22-0242ac110018 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:49:59.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-wt4jl" for this suite. 
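The three secrets named in the STEPs above (s-test-opt-del-*, s-test-opt-upd-*, s-test-opt-create-*) are mounted as optional secret volumes, so the pod keeps running while one secret is deleted, one is updated, and one is created late. A minimal sketch of that shape follows; the mount paths and the sleep command are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-optional-demo     # illustrative name
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: del-volume
      mountPath: /etc/secret-volumes/delete
    - name: upd-volume
      mountPath: /etc/secret-volumes/update
    - name: create-volume
      mountPath: /etc/secret-volumes/create
  volumes:
  - name: del-volume
    secret:
      secretName: s-test-opt-del      # deleted while the pod is running
      optional: true                  # the pod may run even when the secret is absent
  - name: upd-volume
    secret:
      secretName: s-test-opt-upd      # data updated while the pod is running
      optional: true
  - name: create-volume
    secret:
      secretName: s-test-opt-create   # only created after the pod has started
      optional: true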
May 14 12:50:23.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:50:23.475: INFO: namespace: e2e-tests-secrets-wt4jl, resource: bindings, ignored listing per whitelist May 14 12:50:23.484: INFO: namespace e2e-tests-secrets-wt4jl deletion completed in 24.124770015s • [SLOW TEST:124.778 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:50:23.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:50:23.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-kfhxn" for this suite. 
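The QOS-class test above only reports that it created a pod and checked its status. The class is derived from container resources: when every container sets requests equal to limits the pod is Guaranteed, requests below limits give Burstable, and no requests or limits give BestEffort. A minimal Guaranteed example follows; the name, image, and values are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pod-qos-demo                  # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx                      # any long-running image works for the illustration
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:                         # requests == limits for every container, so QoS is Guaranteed
        cpu: 100m
        memory: 100Mi

kubectl get pod pod-qos-demo -o jsonpath='{.status.qosClass}' would then print Guaranteed.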
May 14 12:50:45.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:50:45.762: INFO: namespace: e2e-tests-pods-kfhxn, resource: bindings, ignored listing per whitelist May 14 12:50:45.805: INFO: namespace e2e-tests-pods-kfhxn deletion completed in 22.157139826s • [SLOW TEST:22.321 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:50:45.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-qdps2 May 14 12:50:49.921: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-qdps2 STEP: checking the pod's current state and verifying that restartCount is present May 14 12:50:49.924: INFO: Initial restart count of pod liveness-http is 0 May 14 12:51:09.966: INFO: Restart count of pod e2e-tests-container-probe-qdps2/liveness-http is now 1 (20.042255376s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:51:10.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-qdps2" for this suite. 
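The liveness-http pod above is restarted roughly 20 seconds in because its /healthz probe starts failing. The image, port, and probe timings in the sketch below are assumptions chosen only to illustrate the mechanism; the test's real pod differs in those details.

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
  labels:
    test: liveness
spec:
  containers:
  - name: liveness
    image: example.com/healthz-server:latest   # hypothetical image that serves /healthz and later starts returning 500
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz               # the probe named in the test title
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1

Once the probe fails, the kubelet kills and restarts the container and status.containerStatuses[0].restartCount increments, which is the 0-to-1 transition the log records.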
May 14 12:51:16.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:51:16.171: INFO: namespace: e2e-tests-container-probe-qdps2, resource: bindings, ignored listing per whitelist May 14 12:51:16.175: INFO: namespace e2e-tests-container-probe-qdps2 deletion completed in 6.123526748s • [SLOW TEST:30.369 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:51:16.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 14 12:51:16.321: INFO: Waiting up to 5m0s for pod "downward-api-9977c4af-95e1-11ea-9b22-0242ac110018" in namespace "e2e-tests-downward-api-crdk8" to be "success or failure" May 14 12:51:16.355: INFO: Pod "downward-api-9977c4af-95e1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 33.956308ms May 14 12:51:18.358: INFO: Pod "downward-api-9977c4af-95e1-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037040435s May 14 12:51:20.363: INFO: Pod "downward-api-9977c4af-95e1-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041721628s STEP: Saw pod success May 14 12:51:20.363: INFO: Pod "downward-api-9977c4af-95e1-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 12:51:20.366: INFO: Trying to get logs from node hunter-worker2 pod downward-api-9977c4af-95e1-11ea-9b22-0242ac110018 container dapi-container: STEP: delete the pod May 14 12:51:20.406: INFO: Waiting for pod downward-api-9977c4af-95e1-11ea-9b22-0242ac110018 to disappear May 14 12:51:20.422: INFO: Pod downward-api-9977c4af-95e1-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:51:20.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-crdk8" for this suite. 
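This Downward API variant injects the pod's own UID rather than resource quantities. A minimal equivalent uses fieldRef with metadata.uid; the pod name, variable name, and command below are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID                   # illustrative variable name
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid     # the downward API field this test exercises

As with the limits/requests case earlier in the log, the framework reads the container log and checks that the printed UID matches the UID the API server assigned to the pod.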
May 14 12:51:26.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:51:26.528: INFO: namespace: e2e-tests-downward-api-crdk8, resource: bindings, ignored listing per whitelist May 14 12:51:26.559: INFO: namespace e2e-tests-downward-api-crdk8 deletion completed in 6.102055004s • [SLOW TEST:10.384 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:51:26.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 14 12:51:34.755: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 12:51:34.763: INFO: Pod pod-with-prestop-exec-hook still exists May 14 12:51:36.763: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 12:51:36.768: INFO: Pod pod-with-prestop-exec-hook still exists May 14 12:51:38.763: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 12:51:38.768: INFO: Pod pod-with-prestop-exec-hook still exists May 14 12:51:40.763: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 12:51:40.767: INFO: Pod pod-with-prestop-exec-hook still exists May 14 12:51:42.765: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 12:51:42.770: INFO: Pod pod-with-prestop-exec-hook still exists May 14 12:51:44.763: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 12:51:44.768: INFO: Pod pod-with-prestop-exec-hook still exists May 14 12:51:46.763: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 12:51:46.767: INFO: Pod pod-with-prestop-exec-hook still exists May 14 12:51:48.763: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 12:51:48.767: INFO: Pod pod-with-prestop-exec-hook still exists May 14 12:51:50.763: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 12:51:50.768: INFO: Pod pod-with-prestop-exec-hook still exists May 14 12:51:52.763: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 12:51:52.787: INFO: Pod pod-with-prestop-exec-hook still exists May 14 12:51:54.763: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 12:51:54.768: INFO: Pod pod-with-prestop-exec-hook still exists May 14 12:51:56.763: INFO: Waiting for pod pod-with-prestop-exec-hook to 
disappear May 14 12:51:56.768: INFO: Pod pod-with-prestop-exec-hook still exists May 14 12:51:58.763: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 12:51:58.768: INFO: Pod pod-with-prestop-exec-hook still exists May 14 12:52:00.763: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 12:52:00.767: INFO: Pod pod-with-prestop-exec-hook still exists May 14 12:52:02.763: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 14 12:52:02.767: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:52:02.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-z6rfv" for this suite. May 14 12:52:24.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:52:24.909: INFO: namespace: e2e-tests-container-lifecycle-hook-z6rfv, resource: bindings, ignored listing per whitelist May 14 12:52:24.912: INFO: namespace e2e-tests-container-lifecycle-hook-z6rfv deletion completed in 22.133389648s • [SLOW TEST:58.353 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:52:24.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-9x9rm [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-9x9rm STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-9x9rm May 14 12:52:25.428: INFO: Found 0 stateful pods, waiting for 1 May 14 12:52:35.432: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 14 12:52:35.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ 
|| true' May 14 12:52:35.696: INFO: stderr: "I0514 12:52:35.572515 3335 log.go:172] (0xc00082a2c0) (0xc000730640) Create stream\nI0514 12:52:35.572562 3335 log.go:172] (0xc00082a2c0) (0xc000730640) Stream added, broadcasting: 1\nI0514 12:52:35.574815 3335 log.go:172] (0xc00082a2c0) Reply frame received for 1\nI0514 12:52:35.574865 3335 log.go:172] (0xc00082a2c0) (0xc0005c2be0) Create stream\nI0514 12:52:35.574886 3335 log.go:172] (0xc00082a2c0) (0xc0005c2be0) Stream added, broadcasting: 3\nI0514 12:52:35.575722 3335 log.go:172] (0xc00082a2c0) Reply frame received for 3\nI0514 12:52:35.575760 3335 log.go:172] (0xc00082a2c0) (0xc0007306e0) Create stream\nI0514 12:52:35.575771 3335 log.go:172] (0xc00082a2c0) (0xc0007306e0) Stream added, broadcasting: 5\nI0514 12:52:35.576413 3335 log.go:172] (0xc00082a2c0) Reply frame received for 5\nI0514 12:52:35.691059 3335 log.go:172] (0xc00082a2c0) Data frame received for 3\nI0514 12:52:35.691084 3335 log.go:172] (0xc0005c2be0) (3) Data frame handling\nI0514 12:52:35.691100 3335 log.go:172] (0xc0005c2be0) (3) Data frame sent\nI0514 12:52:35.691222 3335 log.go:172] (0xc00082a2c0) Data frame received for 5\nI0514 12:52:35.691238 3335 log.go:172] (0xc0007306e0) (5) Data frame handling\nI0514 12:52:35.691261 3335 log.go:172] (0xc00082a2c0) Data frame received for 3\nI0514 12:52:35.691274 3335 log.go:172] (0xc0005c2be0) (3) Data frame handling\nI0514 12:52:35.692164 3335 log.go:172] (0xc00082a2c0) Data frame received for 1\nI0514 12:52:35.692177 3335 log.go:172] (0xc000730640) (1) Data frame handling\nI0514 12:52:35.692187 3335 log.go:172] (0xc000730640) (1) Data frame sent\nI0514 12:52:35.692573 3335 log.go:172] (0xc00082a2c0) (0xc000730640) Stream removed, broadcasting: 1\nI0514 12:52:35.692703 3335 log.go:172] (0xc00082a2c0) (0xc000730640) Stream removed, broadcasting: 1\nI0514 12:52:35.692721 3335 log.go:172] (0xc00082a2c0) (0xc0005c2be0) Stream removed, broadcasting: 3\nI0514 12:52:35.692766 3335 log.go:172] (0xc00082a2c0) Go away received\nI0514 12:52:35.692840 3335 log.go:172] (0xc00082a2c0) (0xc0007306e0) Stream removed, broadcasting: 5\n" May 14 12:52:35.696: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 12:52:35.696: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 12:52:35.699: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 14 12:52:45.703: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 14 12:52:45.703: INFO: Waiting for statefulset status.replicas updated to 0 May 14 12:52:45.723: INFO: POD NODE PHASE GRACE CONDITIONS May 14 12:52:45.723: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC }] May 14 12:52:45.723: INFO: May 14 12:52:45.723: INFO: StatefulSet ss has not reached scale 3, at 1 May 14 12:52:46.727: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990719266s May 14 12:52:47.734: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986814289s May 14 12:52:48.914: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 6.980289246s May 14 12:52:49.917: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.800460344s May 14 12:52:50.968: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.796755003s May 14 12:52:51.974: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.74583539s May 14 12:52:52.992: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.740308155s May 14 12:52:53.997: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.721883043s May 14 12:52:55.002: INFO: Verifying statefulset ss doesn't scale past 3 for another 717.089498ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-9x9rm May 14 12:52:56.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:52:56.228: INFO: stderr: "I0514 12:52:56.128872 3357 log.go:172] (0xc00015c6e0) (0xc00075e640) Create stream\nI0514 12:52:56.128929 3357 log.go:172] (0xc00015c6e0) (0xc00075e640) Stream added, broadcasting: 1\nI0514 12:52:56.132026 3357 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0514 12:52:56.132086 3357 log.go:172] (0xc00015c6e0) (0xc0005c4c80) Create stream\nI0514 12:52:56.132111 3357 log.go:172] (0xc00015c6e0) (0xc0005c4c80) Stream added, broadcasting: 3\nI0514 12:52:56.133276 3357 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0514 12:52:56.133332 3357 log.go:172] (0xc00015c6e0) (0xc0005c4dc0) Create stream\nI0514 12:52:56.133355 3357 log.go:172] (0xc00015c6e0) (0xc0005c4dc0) Stream added, broadcasting: 5\nI0514 12:52:56.134276 3357 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0514 12:52:56.221789 3357 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0514 12:52:56.221821 3357 log.go:172] (0xc0005c4c80) (3) Data frame handling\nI0514 12:52:56.221828 3357 log.go:172] (0xc0005c4c80) (3) Data frame sent\nI0514 12:52:56.221835 3357 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0514 12:52:56.221851 3357 log.go:172] (0xc00015c6e0) Data frame received for 5\nI0514 12:52:56.221883 3357 log.go:172] (0xc0005c4dc0) (5) Data frame handling\nI0514 12:52:56.221912 3357 log.go:172] (0xc0005c4c80) (3) Data frame handling\nI0514 12:52:56.223580 3357 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0514 12:52:56.223628 3357 log.go:172] (0xc00075e640) (1) Data frame handling\nI0514 12:52:56.223656 3357 log.go:172] (0xc00075e640) (1) Data frame sent\nI0514 12:52:56.223677 3357 log.go:172] (0xc00015c6e0) (0xc00075e640) Stream removed, broadcasting: 1\nI0514 12:52:56.223718 3357 log.go:172] (0xc00015c6e0) Go away received\nI0514 12:52:56.223845 3357 log.go:172] (0xc00015c6e0) (0xc00075e640) Stream removed, broadcasting: 1\nI0514 12:52:56.223860 3357 log.go:172] (0xc00015c6e0) (0xc0005c4c80) Stream removed, broadcasting: 3\nI0514 12:52:56.223867 3357 log.go:172] (0xc00015c6e0) (0xc0005c4dc0) Stream removed, broadcasting: 5\n" May 14 12:52:56.228: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 12:52:56.228: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 12:52:56.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true' May 14 12:52:56.456: INFO: stderr: "I0514 12:52:56.357520 3380 log.go:172] (0xc000162840) (0xc000710640) Create stream\nI0514 12:52:56.357581 3380 log.go:172] (0xc000162840) (0xc000710640) Stream added, broadcasting: 1\nI0514 12:52:56.359686 3380 log.go:172] (0xc000162840) Reply frame received for 1\nI0514 12:52:56.359741 3380 log.go:172] (0xc000162840) (0xc0007a8d20) Create stream\nI0514 12:52:56.359766 3380 log.go:172] (0xc000162840) (0xc0007a8d20) Stream added, broadcasting: 3\nI0514 12:52:56.360653 3380 log.go:172] (0xc000162840) Reply frame received for 3\nI0514 12:52:56.360693 3380 log.go:172] (0xc000162840) (0xc000796000) Create stream\nI0514 12:52:56.360705 3380 log.go:172] (0xc000162840) (0xc000796000) Stream added, broadcasting: 5\nI0514 12:52:56.361717 3380 log.go:172] (0xc000162840) Reply frame received for 5\nI0514 12:52:56.448723 3380 log.go:172] (0xc000162840) Data frame received for 5\nI0514 12:52:56.448801 3380 log.go:172] (0xc000796000) (5) Data frame handling\nI0514 12:52:56.448829 3380 log.go:172] (0xc000796000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0514 12:52:56.448864 3380 log.go:172] (0xc000162840) Data frame received for 3\nI0514 12:52:56.448885 3380 log.go:172] (0xc0007a8d20) (3) Data frame handling\nI0514 12:52:56.448897 3380 log.go:172] (0xc0007a8d20) (3) Data frame sent\nI0514 12:52:56.448909 3380 log.go:172] (0xc000162840) Data frame received for 3\nI0514 12:52:56.448919 3380 log.go:172] (0xc0007a8d20) (3) Data frame handling\nI0514 12:52:56.449381 3380 log.go:172] (0xc000162840) Data frame received for 5\nI0514 12:52:56.449432 3380 log.go:172] (0xc000796000) (5) Data frame handling\nI0514 12:52:56.451444 3380 log.go:172] (0xc000162840) Data frame received for 1\nI0514 12:52:56.451478 3380 log.go:172] (0xc000710640) (1) Data frame handling\nI0514 12:52:56.451508 3380 log.go:172] (0xc000710640) (1) Data frame sent\nI0514 12:52:56.451534 3380 log.go:172] (0xc000162840) (0xc000710640) Stream removed, broadcasting: 1\nI0514 12:52:56.451559 3380 log.go:172] (0xc000162840) Go away received\nI0514 12:52:56.451816 3380 log.go:172] (0xc000162840) (0xc000710640) Stream removed, broadcasting: 1\nI0514 12:52:56.451843 3380 log.go:172] (0xc000162840) (0xc0007a8d20) Stream removed, broadcasting: 3\nI0514 12:52:56.451857 3380 log.go:172] (0xc000162840) (0xc000796000) Stream removed, broadcasting: 5\n" May 14 12:52:56.456: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 12:52:56.456: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 12:52:56.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:52:56.701: INFO: stderr: "I0514 12:52:56.626766 3403 log.go:172] (0xc0008242c0) (0xc000722640) Create stream\nI0514 12:52:56.626822 3403 log.go:172] (0xc0008242c0) (0xc000722640) Stream added, broadcasting: 1\nI0514 12:52:56.628591 3403 log.go:172] (0xc0008242c0) Reply frame received for 1\nI0514 12:52:56.628618 3403 log.go:172] (0xc0008242c0) (0xc000604be0) Create stream\nI0514 12:52:56.628626 3403 log.go:172] (0xc0008242c0) (0xc000604be0) Stream added, broadcasting: 3\nI0514 12:52:56.629609 3403 log.go:172] (0xc0008242c0) Reply frame received for 3\nI0514 12:52:56.629655 3403 log.go:172] (0xc0008242c0) (0xc0002d0000) Create stream\nI0514 12:52:56.629684 3403 
log.go:172] (0xc0008242c0) (0xc0002d0000) Stream added, broadcasting: 5\nI0514 12:52:56.630657 3403 log.go:172] (0xc0008242c0) Reply frame received for 5\nI0514 12:52:56.696880 3403 log.go:172] (0xc0008242c0) Data frame received for 3\nI0514 12:52:56.696917 3403 log.go:172] (0xc000604be0) (3) Data frame handling\nI0514 12:52:56.696935 3403 log.go:172] (0xc0008242c0) Data frame received for 5\nI0514 12:52:56.696959 3403 log.go:172] (0xc0002d0000) (5) Data frame handling\nI0514 12:52:56.696970 3403 log.go:172] (0xc0002d0000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0514 12:52:56.696991 3403 log.go:172] (0xc000604be0) (3) Data frame sent\nI0514 12:52:56.697032 3403 log.go:172] (0xc0008242c0) Data frame received for 3\nI0514 12:52:56.697045 3403 log.go:172] (0xc000604be0) (3) Data frame handling\nI0514 12:52:56.697064 3403 log.go:172] (0xc0008242c0) Data frame received for 5\nI0514 12:52:56.697073 3403 log.go:172] (0xc0002d0000) (5) Data frame handling\nI0514 12:52:56.698531 3403 log.go:172] (0xc0008242c0) Data frame received for 1\nI0514 12:52:56.698561 3403 log.go:172] (0xc000722640) (1) Data frame handling\nI0514 12:52:56.698573 3403 log.go:172] (0xc000722640) (1) Data frame sent\nI0514 12:52:56.698588 3403 log.go:172] (0xc0008242c0) (0xc000722640) Stream removed, broadcasting: 1\nI0514 12:52:56.698614 3403 log.go:172] (0xc0008242c0) Go away received\nI0514 12:52:56.698749 3403 log.go:172] (0xc0008242c0) (0xc000722640) Stream removed, broadcasting: 1\nI0514 12:52:56.698779 3403 log.go:172] (0xc0008242c0) (0xc000604be0) Stream removed, broadcasting: 3\nI0514 12:52:56.698797 3403 log.go:172] (0xc0008242c0) (0xc0002d0000) Stream removed, broadcasting: 5\n" May 14 12:52:56.701: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 14 12:52:56.701: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 14 12:52:56.705: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 14 12:52:56.705: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 14 12:52:56.705: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 14 12:52:56.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 12:52:56.914: INFO: stderr: "I0514 12:52:56.840852 3425 log.go:172] (0xc000760370) (0xc000780640) Create stream\nI0514 12:52:56.840914 3425 log.go:172] (0xc000760370) (0xc000780640) Stream added, broadcasting: 1\nI0514 12:52:56.843569 3425 log.go:172] (0xc000760370) Reply frame received for 1\nI0514 12:52:56.843602 3425 log.go:172] (0xc000760370) (0xc0007806e0) Create stream\nI0514 12:52:56.843611 3425 log.go:172] (0xc000760370) (0xc0007806e0) Stream added, broadcasting: 3\nI0514 12:52:56.844540 3425 log.go:172] (0xc000760370) Reply frame received for 3\nI0514 12:52:56.844580 3425 log.go:172] (0xc000760370) (0xc000780780) Create stream\nI0514 12:52:56.844596 3425 log.go:172] (0xc000760370) (0xc000780780) Stream added, broadcasting: 5\nI0514 12:52:56.845712 3425 log.go:172] (0xc000760370) Reply frame received for 5\nI0514 12:52:56.908029 3425 log.go:172] (0xc000760370) Data frame received for 5\nI0514 12:52:56.908092 3425 log.go:172] (0xc000780780) (5) Data 
frame handling\nI0514 12:52:56.908131 3425 log.go:172] (0xc000760370) Data frame received for 3\nI0514 12:52:56.908156 3425 log.go:172] (0xc0007806e0) (3) Data frame handling\nI0514 12:52:56.908185 3425 log.go:172] (0xc0007806e0) (3) Data frame sent\nI0514 12:52:56.908217 3425 log.go:172] (0xc000760370) Data frame received for 3\nI0514 12:52:56.908247 3425 log.go:172] (0xc0007806e0) (3) Data frame handling\nI0514 12:52:56.909910 3425 log.go:172] (0xc000760370) Data frame received for 1\nI0514 12:52:56.909950 3425 log.go:172] (0xc000780640) (1) Data frame handling\nI0514 12:52:56.909965 3425 log.go:172] (0xc000780640) (1) Data frame sent\nI0514 12:52:56.909993 3425 log.go:172] (0xc000760370) (0xc000780640) Stream removed, broadcasting: 1\nI0514 12:52:56.910025 3425 log.go:172] (0xc000760370) Go away received\nI0514 12:52:56.910453 3425 log.go:172] (0xc000760370) (0xc000780640) Stream removed, broadcasting: 1\nI0514 12:52:56.910473 3425 log.go:172] (0xc000760370) (0xc0007806e0) Stream removed, broadcasting: 3\nI0514 12:52:56.910485 3425 log.go:172] (0xc000760370) (0xc000780780) Stream removed, broadcasting: 5\n" May 14 12:52:56.914: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 12:52:56.914: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 12:52:56.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 12:52:57.204: INFO: stderr: "I0514 12:52:57.079929 3448 log.go:172] (0xc000138580) (0xc00029b5e0) Create stream\nI0514 12:52:57.080054 3448 log.go:172] (0xc000138580) (0xc00029b5e0) Stream added, broadcasting: 1\nI0514 12:52:57.082680 3448 log.go:172] (0xc000138580) Reply frame received for 1\nI0514 12:52:57.082719 3448 log.go:172] (0xc000138580) (0xc0004f0000) Create stream\nI0514 12:52:57.082731 3448 log.go:172] (0xc000138580) (0xc0004f0000) Stream added, broadcasting: 3\nI0514 12:52:57.083438 3448 log.go:172] (0xc000138580) Reply frame received for 3\nI0514 12:52:57.083467 3448 log.go:172] (0xc000138580) (0xc00029b680) Create stream\nI0514 12:52:57.083479 3448 log.go:172] (0xc000138580) (0xc00029b680) Stream added, broadcasting: 5\nI0514 12:52:57.084376 3448 log.go:172] (0xc000138580) Reply frame received for 5\nI0514 12:52:57.196804 3448 log.go:172] (0xc000138580) Data frame received for 3\nI0514 12:52:57.196856 3448 log.go:172] (0xc0004f0000) (3) Data frame handling\nI0514 12:52:57.196897 3448 log.go:172] (0xc0004f0000) (3) Data frame sent\nI0514 12:52:57.196968 3448 log.go:172] (0xc000138580) Data frame received for 3\nI0514 12:52:57.196985 3448 log.go:172] (0xc0004f0000) (3) Data frame handling\nI0514 12:52:57.197058 3448 log.go:172] (0xc000138580) Data frame received for 5\nI0514 12:52:57.197350 3448 log.go:172] (0xc00029b680) (5) Data frame handling\nI0514 12:52:57.198797 3448 log.go:172] (0xc000138580) Data frame received for 1\nI0514 12:52:57.198812 3448 log.go:172] (0xc00029b5e0) (1) Data frame handling\nI0514 12:52:57.198821 3448 log.go:172] (0xc00029b5e0) (1) Data frame sent\nI0514 12:52:57.198835 3448 log.go:172] (0xc000138580) (0xc00029b5e0) Stream removed, broadcasting: 1\nI0514 12:52:57.198847 3448 log.go:172] (0xc000138580) Go away received\nI0514 12:52:57.199113 3448 log.go:172] (0xc000138580) (0xc00029b5e0) Stream removed, broadcasting: 1\nI0514 12:52:57.199139 3448 log.go:172] (0xc000138580) 
(0xc0004f0000) Stream removed, broadcasting: 3\nI0514 12:52:57.199152 3448 log.go:172] (0xc000138580) (0xc00029b680) Stream removed, broadcasting: 5\n" May 14 12:52:57.204: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 12:52:57.204: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 12:52:57.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 14 12:52:57.532: INFO: stderr: "I0514 12:52:57.355287 3471 log.go:172] (0xc0007ec2c0) (0xc00070a5a0) Create stream\nI0514 12:52:57.355356 3471 log.go:172] (0xc0007ec2c0) (0xc00070a5a0) Stream added, broadcasting: 1\nI0514 12:52:57.357438 3471 log.go:172] (0xc0007ec2c0) Reply frame received for 1\nI0514 12:52:57.357494 3471 log.go:172] (0xc0007ec2c0) (0xc0005d6be0) Create stream\nI0514 12:52:57.357517 3471 log.go:172] (0xc0007ec2c0) (0xc0005d6be0) Stream added, broadcasting: 3\nI0514 12:52:57.358223 3471 log.go:172] (0xc0007ec2c0) Reply frame received for 3\nI0514 12:52:57.358243 3471 log.go:172] (0xc0007ec2c0) (0xc0005d6d20) Create stream\nI0514 12:52:57.358252 3471 log.go:172] (0xc0007ec2c0) (0xc0005d6d20) Stream added, broadcasting: 5\nI0514 12:52:57.359220 3471 log.go:172] (0xc0007ec2c0) Reply frame received for 5\nI0514 12:52:57.524468 3471 log.go:172] (0xc0007ec2c0) Data frame received for 5\nI0514 12:52:57.524499 3471 log.go:172] (0xc0005d6d20) (5) Data frame handling\nI0514 12:52:57.524535 3471 log.go:172] (0xc0007ec2c0) Data frame received for 3\nI0514 12:52:57.524556 3471 log.go:172] (0xc0005d6be0) (3) Data frame handling\nI0514 12:52:57.524569 3471 log.go:172] (0xc0005d6be0) (3) Data frame sent\nI0514 12:52:57.524576 3471 log.go:172] (0xc0007ec2c0) Data frame received for 3\nI0514 12:52:57.524581 3471 log.go:172] (0xc0005d6be0) (3) Data frame handling\nI0514 12:52:57.526480 3471 log.go:172] (0xc0007ec2c0) Data frame received for 1\nI0514 12:52:57.526528 3471 log.go:172] (0xc00070a5a0) (1) Data frame handling\nI0514 12:52:57.526572 3471 log.go:172] (0xc00070a5a0) (1) Data frame sent\nI0514 12:52:57.526606 3471 log.go:172] (0xc0007ec2c0) (0xc00070a5a0) Stream removed, broadcasting: 1\nI0514 12:52:57.526642 3471 log.go:172] (0xc0007ec2c0) Go away received\nI0514 12:52:57.526936 3471 log.go:172] (0xc0007ec2c0) (0xc00070a5a0) Stream removed, broadcasting: 1\nI0514 12:52:57.526979 3471 log.go:172] (0xc0007ec2c0) (0xc0005d6be0) Stream removed, broadcasting: 3\nI0514 12:52:57.527023 3471 log.go:172] (0xc0007ec2c0) (0xc0005d6d20) Stream removed, broadcasting: 5\n" May 14 12:52:57.532: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 14 12:52:57.532: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 14 12:52:57.532: INFO: Waiting for statefulset status.replicas updated to 0 May 14 12:52:57.535: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 14 12:53:07.544: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 14 12:53:07.544: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 14 12:53:07.544: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 14 12:53:07.563: INFO: POD NODE PHASE GRACE CONDITIONS May 14 
12:53:07.563: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC }] May 14 12:53:07.563: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:07.563: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:07.563: INFO: May 14 12:53:07.563: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 12:53:08.567: INFO: POD NODE PHASE GRACE CONDITIONS May 14 12:53:08.567: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC }] May 14 12:53:08.567: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:08.567: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:08.567: INFO: May 14 12:53:08.567: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 12:53:09.605: INFO: POD NODE PHASE GRACE CONDITIONS May 14 12:53:09.605: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC }] May 14 12:53:09.605: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:09.605: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:09.605: INFO: May 14 12:53:09.605: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 12:53:10.662: INFO: POD NODE PHASE GRACE CONDITIONS May 14 12:53:10.663: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC }] May 14 12:53:10.663: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:10.663: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:10.663: INFO: May 14 12:53:10.663: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 12:53:11.667: INFO: POD NODE PHASE GRACE CONDITIONS May 14 12:53:11.667: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC }] May 14 12:53:11.667: INFO: 
ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:11.667: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:11.667: INFO: May 14 12:53:11.667: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 12:53:12.671: INFO: POD NODE PHASE GRACE CONDITIONS May 14 12:53:12.671: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC }] May 14 12:53:12.671: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:12.672: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:12.672: INFO: May 14 12:53:12.672: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 12:53:13.675: INFO: POD NODE PHASE GRACE CONDITIONS May 14 12:53:13.675: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC }] May 14 12:53:13.675: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:13.675: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:13.675: INFO: May 14 12:53:13.675: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 12:53:14.680: INFO: POD NODE PHASE GRACE CONDITIONS May 14 12:53:14.680: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC }] May 14 12:53:14.681: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:14.681: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:14.681: INFO: May 14 12:53:14.681: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 12:53:15.686: INFO: POD NODE PHASE GRACE CONDITIONS May 14 12:53:15.686: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC }] May 14 12:53:15.686: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:15.686: INFO: 
ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:15.686: INFO: May 14 12:53:15.686: INFO: StatefulSet ss has not reached scale 0, at 3 May 14 12:53:16.692: INFO: POD NODE PHASE GRACE CONDITIONS May 14 12:53:16.692: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:25 +0000 UTC }] May 14 12:53:16.692: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:16.692: INFO: ss-2 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 12:52:45 +0000 UTC }] May 14 12:53:16.692: INFO: May 14 12:53:16.692: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-9x9rm May 14 12:53:17.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:53:17.833: INFO: rc: 1 May 14 12:53:17.834: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002523620 exit status 1 true [0xc000f9a328 0xc000f9a340 0xc000f9a358] [0xc000f9a328 0xc000f9a340 0xc000f9a358] [0xc000f9a338 0xc000f9a350] [0x935700 0x935700] 0xc0017b7320 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 May 14 12:53:27.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:53:27.918: INFO: rc: 1 May 14 12:53:27.918: INFO: Waiting 10s to retry failed RunHostCmd: error running 
&{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002523770 exit status 1 true [0xc000f9a360 0xc000f9a378 0xc000f9a390] [0xc000f9a360 0xc000f9a378 0xc000f9a390] [0xc000f9a370 0xc000f9a388] [0x935700 0x935700] 0xc0017b7740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:53:37.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:53:38.020: INFO: rc: 1 May 14 12:53:38.020: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000aaa120 exit status 1 true [0xc00016e000 0xc0006a00d8 0xc0006a0178] [0xc00016e000 0xc0006a00d8 0xc0006a0178] [0xc0006a0098 0xc0006a00f8] [0x935700 0x935700] 0xc0016453e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:53:48.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:53:48.105: INFO: rc: 1 May 14 12:53:48.105: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001fee120 exit status 1 true [0xc000bec000 0xc000bec018 0xc000bec030] [0xc000bec000 0xc000bec018 0xc000bec030] [0xc000bec010 0xc000bec028] [0x935700 0x935700] 0xc00127bc80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:53:58.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:53:58.204: INFO: rc: 1 May 14 12:53:58.204: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001fee3c0 exit status 1 true [0xc000bec038 0xc000bec050 0xc000bec068] [0xc000bec038 0xc000bec050 0xc000bec068] [0xc000bec048 0xc000bec060] [0x935700 0x935700] 0xc000874780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:54:08.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:54:08.285: INFO: rc: 1 May 14 12:54:08.285: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error 
from server (NotFound): pods "ss-0" not found [] 0xc000aaa270 exit status 1 true [0xc0006a0198 0xc0006a0210 0xc0006a0240] [0xc0006a0198 0xc0006a0210 0xc0006a0240] [0xc0006a01c8 0xc0006a0238] [0x935700 0x935700] 0xc0014de180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:54:18.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:54:18.380: INFO: rc: 1 May 14 12:54:18.380: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000aaa390 exit status 1 true [0xc0006a0250 0xc0006a0278 0xc0006a02f0] [0xc0006a0250 0xc0006a0278 0xc0006a02f0] [0xc0006a0268 0xc0006a0298] [0x935700 0x935700] 0xc0014df440 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:54:28.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:54:28.693: INFO: rc: 1 May 14 12:54:28.693: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001fee6c0 exit status 1 true [0xc000bec070 0xc000bec088 0xc000bec0a0] [0xc000bec070 0xc000bec088 0xc000bec0a0] [0xc000bec080 0xc000bec098] [0x935700 0x935700] 0xc000874c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:54:38.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:54:38.782: INFO: rc: 1 May 14 12:54:38.782: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001594120 exit status 1 true [0xc000f9a000 0xc000f9a018 0xc000f9a030] [0xc000f9a000 0xc000f9a018 0xc000f9a030] [0xc000f9a010 0xc000f9a028] [0x935700 0x935700] 0xc0010ce840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:54:48.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:54:49.219: INFO: rc: 1 May 14 12:54:49.219: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001594240 exit status 1 true [0xc000f9a038 0xc000f9a050 0xc000f9a068] [0xc000f9a038 0xc000f9a050 0xc000f9a068] [0xc000f9a048 0xc000f9a060] 
[0x935700 0x935700] 0xc0010cec00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:54:59.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:54:59.305: INFO: rc: 1 May 14 12:54:59.305: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001594450 exit status 1 true [0xc000f9a070 0xc000f9a088 0xc000f9a0a0] [0xc000f9a070 0xc000f9a088 0xc000f9a0a0] [0xc000f9a080 0xc000f9a098] [0x935700 0x935700] 0xc0010cf080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:55:09.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:55:09.384: INFO: rc: 1 May 14 12:55:09.384: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001fee900 exit status 1 true [0xc000bec0a8 0xc000bec0c0 0xc000bec0d8] [0xc000bec0a8 0xc000bec0c0 0xc000bec0d8] [0xc000bec0b8 0xc000bec0d0] [0x935700 0x935700] 0xc000874f60 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:55:19.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:55:19.474: INFO: rc: 1 May 14 12:55:19.474: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000fe6990 exit status 1 true [0xc0003e2020 0xc0003e2090 0xc0003e20d0] [0xc0003e2020 0xc0003e2090 0xc0003e20d0] [0xc0003e2068 0xc0003e20c8] [0x935700 0x935700] 0xc00186ea20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:55:29.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:55:29.565: INFO: rc: 1 May 14 12:55:29.565: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000fe6b10 exit status 1 true [0xc0003e20f0 0xc0003e2130 0xc0003e2168] [0xc0003e20f0 0xc0003e2130 0xc0003e2168] [0xc0003e2128 0xc0003e2158] [0x935700 0x935700] 0xc00186f560 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:55:39.566: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:55:39.658: INFO: rc: 1 May 14 12:55:39.658: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0015945a0 exit status 1 true [0xc000f9a0b0 0xc000f9a0c8 0xc000f9a0e0] [0xc000f9a0b0 0xc000f9a0c8 0xc000f9a0e0] [0xc000f9a0c0 0xc000f9a0d8] [0x935700 0x935700] 0xc0010cf500 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:55:49.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:55:49.754: INFO: rc: 1 May 14 12:55:49.754: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001fee150 exit status 1 true [0xc00016e000 0xc000f9a000 0xc000f9a018] [0xc00016e000 0xc000f9a000 0xc000f9a018] [0xc00000e100 0xc000f9a010] [0x935700 0x935700] 0xc00127bc80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:55:59.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:55:59.848: INFO: rc: 1 May 14 12:55:59.849: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001594150 exit status 1 true [0xc000bec000 0xc000bec018 0xc000bec030] [0xc000bec000 0xc000bec018 0xc000bec030] [0xc000bec010 0xc000bec028] [0x935700 0x935700] 0xc0010ce7e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:56:09.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:56:09.927: INFO: rc: 1 May 14 12:56:09.927: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0015942a0 exit status 1 true [0xc000bec038 0xc000bec050 0xc000bec068] [0xc000bec038 0xc000bec050 0xc000bec068] [0xc000bec048 0xc000bec060] [0x935700 0x935700] 0xc0010ceba0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:56:19.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:56:20.037: INFO: rc: 1 May 14 
12:56:20.037: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001fee3f0 exit status 1 true [0xc000f9a020 0xc000f9a038 0xc000f9a050] [0xc000f9a020 0xc000f9a038 0xc000f9a050] [0xc000f9a030 0xc000f9a048] [0x935700 0x935700] 0xc001645bc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:56:30.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:56:30.123: INFO: rc: 1 May 14 12:56:30.123: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000aaa1b0 exit status 1 true [0xc0003e2020 0xc0003e2090 0xc0003e20d0] [0xc0003e2020 0xc0003e2090 0xc0003e20d0] [0xc0003e2068 0xc0003e20c8] [0x935700 0x935700] 0xc000874720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:56:40.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:56:40.222: INFO: rc: 1 May 14 12:56:40.222: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001fee7e0 exit status 1 true [0xc000f9a058 0xc000f9a070 0xc000f9a088] [0xc000f9a058 0xc000f9a070 0xc000f9a088] [0xc000f9a068 0xc000f9a080] [0x935700 0x935700] 0xc00186e660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:56:50.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:56:50.309: INFO: rc: 1 May 14 12:56:50.309: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001fee930 exit status 1 true [0xc000f9a090 0xc000f9a0e8 0xc000f9a100] [0xc000f9a090 0xc000f9a0e8 0xc000f9a100] [0xc000f9a0a0 0xc000f9a0f8] [0x935700 0x935700] 0xc00186f140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:57:00.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:57:00.405: INFO: rc: 1 May 14 12:57:00.405: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- 
/bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001594630 exit status 1 true [0xc000bec070 0xc000bec088 0xc000bec0a0] [0xc000bec070 0xc000bec088 0xc000bec0a0] [0xc000bec080 0xc000bec098] [0x935700 0x935700] 0xc0010cefc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:57:10.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:57:10.516: INFO: rc: 1 May 14 12:57:10.517: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001594780 exit status 1 true [0xc000bec0a8 0xc000bec0c0 0xc000bec0d8] [0xc000bec0a8 0xc000bec0c0 0xc000bec0d8] [0xc000bec0b8 0xc000bec0d0] [0x935700 0x935700] 0xc0010cf860 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:57:20.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:57:20.611: INFO: rc: 1 May 14 12:57:20.611: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0015948a0 exit status 1 true [0xc000bec0e0 0xc000bec0f8 0xc000bec110] [0xc000bec0e0 0xc000bec0f8 0xc000bec110] [0xc000bec0f0 0xc000bec108] [0x935700 0x935700] 0xc0010cfb00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:57:30.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:57:30.692: INFO: rc: 1 May 14 12:57:30.692: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001feea80 exit status 1 true [0xc000f9a108 0xc000f9a120 0xc000f9a138] [0xc000f9a108 0xc000f9a120 0xc000f9a138] [0xc000f9a118 0xc000f9a130] [0x935700 0x935700] 0xc0014de180 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:57:40.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:57:40.785: INFO: rc: 1 May 14 12:57:40.785: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001fee120 exit status 1 true [0xc00016e000 0xc000f9a008 
0xc000f9a020] [0xc00016e000 0xc000f9a008 0xc000f9a020] [0xc000f9a000 0xc000f9a018] [0x935700 0x935700] 0xc0016453e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:57:50.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:57:50.878: INFO: rc: 1 May 14 12:57:50.878: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000fe6780 exit status 1 true [0xc000bec000 0xc000bec018 0xc000bec030] [0xc000bec000 0xc000bec018 0xc000bec030] [0xc000bec010 0xc000bec028] [0x935700 0x935700] 0xc00127bc80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:58:00.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:58:01.002: INFO: rc: 1 May 14 12:58:01.002: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001594120 exit status 1 true [0xc0003e2020 0xc0003e2090 0xc0003e20d0] [0xc0003e2020 0xc0003e2090 0xc0003e20d0] [0xc0003e2068 0xc0003e20c8] [0x935700 0x935700] 0xc0014de600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:58:11.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:58:11.092: INFO: rc: 1 May 14 12:58:11.092: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000fe6a80 exit status 1 true [0xc000bec038 0xc000bec050 0xc000bec068] [0xc000bec038 0xc000bec050 0xc000bec068] [0xc000bec048 0xc000bec060] [0x935700 0x935700] 0xc00186eb40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 14 12:58:21.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-9x9rm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 14 12:58:21.179: INFO: rc: 1 May 14 12:58:21.179: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: May 14 12:58:21.179: INFO: Scaling statefulset ss to 0 May 14 12:58:21.188: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 14 12:58:21.190: INFO: Deleting all statefulset in ns e2e-tests-statefulset-9x9rm May 14 12:58:21.192: INFO: Scaling statefulset ss to 0 May 14 12:58:21.198: INFO: Waiting 
for statefulset status.replicas updated to 0 May 14 12:58:21.200: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:58:21.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-9x9rm" for this suite. May 14 12:58:27.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:58:27.368: INFO: namespace: e2e-tests-statefulset-9x9rm, resource: bindings, ignored listing per whitelist May 14 12:58:27.404: INFO: namespace e2e-tests-statefulset-9x9rm deletion completed in 6.182237139s • [SLOW TEST:362.492 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:58:27.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-9a7f10fe-95e2-11ea-9b22-0242ac110018 STEP: Creating secret with name secret-projected-all-test-volume-9a7f10d7-95e2-11ea-9b22-0242ac110018 STEP: Creating a pod to test Check all projections for projected volume plugin May 14 12:58:27.589: INFO: Waiting up to 5m0s for pod "projected-volume-9a7f106e-95e2-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-mpdx2" to be "success or failure" May 14 12:58:27.639: INFO: Pod "projected-volume-9a7f106e-95e2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 49.74271ms May 14 12:58:29.643: INFO: Pod "projected-volume-9a7f106e-95e2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053763305s May 14 12:58:31.647: INFO: Pod "projected-volume-9a7f106e-95e2-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.058052722s STEP: Saw pod success May 14 12:58:31.647: INFO: Pod "projected-volume-9a7f106e-95e2-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 12:58:31.651: INFO: Trying to get logs from node hunter-worker pod projected-volume-9a7f106e-95e2-11ea-9b22-0242ac110018 container projected-all-volume-test: STEP: delete the pod May 14 12:58:31.712: INFO: Waiting for pod projected-volume-9a7f106e-95e2-11ea-9b22-0242ac110018 to disappear May 14 12:58:31.729: INFO: Pod projected-volume-9a7f106e-95e2-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:58:31.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mpdx2" for this suite. May 14 12:58:37.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:58:37.808: INFO: namespace: e2e-tests-projected-mpdx2, resource: bindings, ignored listing per whitelist May 14 12:58:37.840: INFO: namespace e2e-tests-projected-mpdx2 deletion completed in 6.105257967s • [SLOW TEST:10.436 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:58:37.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium May 14 12:58:37.988: INFO: Waiting up to 5m0s for pod "pod-a0b0a5de-95e2-11ea-9b22-0242ac110018" in namespace "e2e-tests-emptydir-4f4tb" to be "success or failure" May 14 12:58:38.020: INFO: Pod "pod-a0b0a5de-95e2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 31.523992ms May 14 12:58:40.024: INFO: Pod "pod-a0b0a5de-95e2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03561812s May 14 12:58:42.028: INFO: Pod "pod-a0b0a5de-95e2-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.040264541s May 14 12:58:44.033: INFO: Pod "pod-a0b0a5de-95e2-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.044948028s STEP: Saw pod success May 14 12:58:44.033: INFO: Pod "pod-a0b0a5de-95e2-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 12:58:44.036: INFO: Trying to get logs from node hunter-worker2 pod pod-a0b0a5de-95e2-11ea-9b22-0242ac110018 container test-container: STEP: delete the pod May 14 12:58:44.108: INFO: Waiting for pod pod-a0b0a5de-95e2-11ea-9b22-0242ac110018 to disappear May 14 12:58:44.121: INFO: Pod pod-a0b0a5de-95e2-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:58:44.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4f4tb" for this suite. May 14 12:58:50.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:58:50.225: INFO: namespace: e2e-tests-emptydir-4f4tb, resource: bindings, ignored listing per whitelist May 14 12:58:50.240: INFO: namespace e2e-tests-emptydir-4f4tb deletion completed in 6.116115596s • [SLOW TEST:12.400 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:58:50.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 12:58:54.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-h67z9" for this suite. 
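Annotator's note: the Kubelet "should print the output to logs" case above schedules a one-shot busybox pod and then reads its container log back. As a rough sketch only (pod/container names, image tag and the echoed string are assumptions, not the test's exact values; it assumes the k8s.io/api and k8s.io/apimachinery modules are available), the object it creates looks roughly like this:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod in the spirit of the test: a busybox container runs a
	// single echo and exits; its stdout lands in the container log, which the
	// test then fetches via the kubelet/API server.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-scheduling-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox-container",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "echo 'Hello from the busybox pod'"},
			}},
		},
	}

	// Print the manifest as JSON; actually applying it to a cluster is out of scope here.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```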
May 14 12:59:46.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 12:59:46.526: INFO: namespace: e2e-tests-kubelet-test-h67z9, resource: bindings, ignored listing per whitelist May 14 12:59:46.570: INFO: namespace e2e-tests-kubelet-test-h67z9 deletion completed in 52.101543002s • [SLOW TEST:56.329 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 12:59:46.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-8shcp STEP: creating a selector STEP: Creating the service pods in kubernetes May 14 12:59:46.703: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 14 13:00:08.846: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.120 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-8shcp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 13:00:08.846: INFO: >>> kubeConfig: /root/.kube/config I0514 13:00:08.883774 6 log.go:172] (0xc0005e7ad0) (0xc001cfbb80) Create stream I0514 13:00:08.883824 6 log.go:172] (0xc0005e7ad0) (0xc001cfbb80) Stream added, broadcasting: 1 I0514 13:00:08.886421 6 log.go:172] (0xc0005e7ad0) Reply frame received for 1 I0514 13:00:08.886494 6 log.go:172] (0xc0005e7ad0) (0xc00105bae0) Create stream I0514 13:00:08.886518 6 log.go:172] (0xc0005e7ad0) (0xc00105bae0) Stream added, broadcasting: 3 I0514 13:00:08.887636 6 log.go:172] (0xc0005e7ad0) Reply frame received for 3 I0514 13:00:08.887676 6 log.go:172] (0xc0005e7ad0) (0xc001b3b220) Create stream I0514 13:00:08.887689 6 log.go:172] (0xc0005e7ad0) (0xc001b3b220) Stream added, broadcasting: 5 I0514 13:00:08.888621 6 log.go:172] (0xc0005e7ad0) Reply frame received for 5 I0514 13:00:09.991593 6 log.go:172] (0xc0005e7ad0) Data frame received for 5 I0514 13:00:09.991654 6 log.go:172] (0xc001b3b220) (5) Data frame handling I0514 13:00:09.991744 6 log.go:172] (0xc0005e7ad0) Data frame received for 3 I0514 13:00:09.991807 6 log.go:172] (0xc00105bae0) (3) Data frame handling I0514 13:00:09.991829 6 log.go:172] (0xc00105bae0) (3) Data frame sent I0514 13:00:09.991845 6 log.go:172] (0xc0005e7ad0) Data frame received for 3 I0514 13:00:09.991858 6 log.go:172] (0xc00105bae0) (3) Data frame 
handling I0514 13:00:09.994033 6 log.go:172] (0xc0005e7ad0) Data frame received for 1 I0514 13:00:09.994069 6 log.go:172] (0xc001cfbb80) (1) Data frame handling I0514 13:00:09.994088 6 log.go:172] (0xc001cfbb80) (1) Data frame sent I0514 13:00:09.994113 6 log.go:172] (0xc0005e7ad0) (0xc001cfbb80) Stream removed, broadcasting: 1 I0514 13:00:09.994142 6 log.go:172] (0xc0005e7ad0) Go away received I0514 13:00:09.994226 6 log.go:172] (0xc0005e7ad0) (0xc001cfbb80) Stream removed, broadcasting: 1 I0514 13:00:09.994270 6 log.go:172] (0xc0005e7ad0) (0xc00105bae0) Stream removed, broadcasting: 3 I0514 13:00:09.994296 6 log.go:172] (0xc0005e7ad0) (0xc001b3b220) Stream removed, broadcasting: 5 May 14 13:00:09.994: INFO: Found all expected endpoints: [netserver-0] May 14 13:00:09.997: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.211 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-8shcp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 14 13:00:09.997: INFO: >>> kubeConfig: /root/.kube/config I0514 13:00:10.031668 6 log.go:172] (0xc000ce04d0) (0xc001b3b4a0) Create stream I0514 13:00:10.031694 6 log.go:172] (0xc000ce04d0) (0xc001b3b4a0) Stream added, broadcasting: 1 I0514 13:00:10.034650 6 log.go:172] (0xc000ce04d0) Reply frame received for 1 I0514 13:00:10.034687 6 log.go:172] (0xc000ce04d0) (0xc00105bb80) Create stream I0514 13:00:10.034699 6 log.go:172] (0xc000ce04d0) (0xc00105bb80) Stream added, broadcasting: 3 I0514 13:00:10.035492 6 log.go:172] (0xc000ce04d0) Reply frame received for 3 I0514 13:00:10.035520 6 log.go:172] (0xc000ce04d0) (0xc0016ba640) Create stream I0514 13:00:10.035530 6 log.go:172] (0xc000ce04d0) (0xc0016ba640) Stream added, broadcasting: 5 I0514 13:00:10.036444 6 log.go:172] (0xc000ce04d0) Reply frame received for 5 I0514 13:00:11.119351 6 log.go:172] (0xc000ce04d0) Data frame received for 3 I0514 13:00:11.119380 6 log.go:172] (0xc00105bb80) (3) Data frame handling I0514 13:00:11.119388 6 log.go:172] (0xc00105bb80) (3) Data frame sent I0514 13:00:11.119599 6 log.go:172] (0xc000ce04d0) Data frame received for 3 I0514 13:00:11.119626 6 log.go:172] (0xc00105bb80) (3) Data frame handling I0514 13:00:11.119651 6 log.go:172] (0xc000ce04d0) Data frame received for 5 I0514 13:00:11.119664 6 log.go:172] (0xc0016ba640) (5) Data frame handling I0514 13:00:11.121842 6 log.go:172] (0xc000ce04d0) Data frame received for 1 I0514 13:00:11.121889 6 log.go:172] (0xc001b3b4a0) (1) Data frame handling I0514 13:00:11.121916 6 log.go:172] (0xc001b3b4a0) (1) Data frame sent I0514 13:00:11.121933 6 log.go:172] (0xc000ce04d0) (0xc001b3b4a0) Stream removed, broadcasting: 1 I0514 13:00:11.121951 6 log.go:172] (0xc000ce04d0) Go away received I0514 13:00:11.122130 6 log.go:172] (0xc000ce04d0) (0xc001b3b4a0) Stream removed, broadcasting: 1 I0514 13:00:11.122164 6 log.go:172] (0xc000ce04d0) (0xc00105bb80) Stream removed, broadcasting: 3 I0514 13:00:11.122172 6 log.go:172] (0xc000ce04d0) (0xc0016ba640) Stream removed, broadcasting: 5 May 14 13:00:11.122: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:00:11.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-8shcp" for this suite. 
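Annotator's note: the node-pod UDP check above is just `echo 'hostName' | nc -w 1 -u <podIP> 8081` executed inside the host-test container. A minimal stand-in for that probe in plain Go (the target address below is the netserver IP taken from this log and is only meaningful inside that cluster at that moment):

```go
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Send "hostName" to the netserver pod over UDP and print whatever it
	// answers; the test passes when every expected netserver endpoint replies.
	conn, err := net.DialTimeout("udp", "10.244.1.120:8081", time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	conn.SetDeadline(time.Now().Add(time.Second)) // mirrors nc's -w 1 timeout
	if _, err := conn.Write([]byte("hostName")); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}

	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	fmt.Println("netserver answered:", string(buf[:n]))
}
```

The `grep -v '^\s*$'` in the logged command only strips blank lines from nc's output; the Go version above simply prints the raw reply.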
May 14 13:00:35.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:00:35.191: INFO: namespace: e2e-tests-pod-network-test-8shcp, resource: bindings, ignored listing per whitelist May 14 13:00:35.252: INFO: namespace e2e-tests-pod-network-test-8shcp deletion completed in 24.100894153s • [SLOW TEST:48.682 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:00:35.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 14 13:00:43.266: INFO: 10 pods remaining May 14 13:00:43.266: INFO: 0 pods has nil DeletionTimestamp May 14 13:00:43.266: INFO: May 14 13:00:44.934: INFO: 0 pods remaining May 14 13:00:44.934: INFO: 0 pods has nil DeletionTimestamp May 14 13:00:44.934: INFO: May 14 13:00:45.329: INFO: 0 pods remaining May 14 13:00:45.329: INFO: 0 pods has nil DeletionTimestamp May 14 13:00:45.329: INFO: STEP: Gathering metrics W0514 13:00:46.647980 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
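Annotator's note: the garbage-collector case running here ("keep the rc around until all its pods are deleted if the deleteOptions says so") hinges entirely on the propagation policy of the delete request. The usual way to ask for that behaviour is foreground cascading deletion; a minimal sketch of such options (how the test itself builds them is not shown in this log):

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// With foreground propagation the ReplicationController receives a
	// foregroundDeletion finalizer and only disappears after the garbage
	// collector has removed all of its pods - the countdown visible above
	// ("10 pods remaining", then 0, then the RC itself goes away).
	propagation := metav1.DeletePropagationForeground
	opts := metav1.DeleteOptions{PropagationPolicy: &propagation}

	out, _ := json.MarshalIndent(opts, "", "  ")
	fmt.Println(string(out))
}
```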
May 14 13:00:46.648: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:00:46.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-jbhfn" for this suite. May 14 13:00:53.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:00:53.254: INFO: namespace: e2e-tests-gc-jbhfn, resource: bindings, ignored listing per whitelist May 14 13:00:53.288: INFO: namespace e2e-tests-gc-jbhfn deletion completed in 6.620554s • [SLOW TEST:18.036 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:00:53.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 13:00:53.403: INFO: Creating deployment "test-recreate-deployment" May 14 13:00:53.418: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 14 13:00:53.427: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 14 13:00:55.483: INFO: Waiting deployment "test-recreate-deployment" to complete May 14 13:00:55.485: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058053, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058053, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058053, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:00:57.487: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 14 13:00:57.491: INFO: Updating deployment test-recreate-deployment May 14 13:00:57.491: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 14 13:00:58.188: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-lrqt5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-lrqt5/deployments/test-recreate-deployment,UID:f1708107-95e2-11ea-99e8-0242ac110002,ResourceVersion:10537520,Generation:2,CreationTimestamp:2020-05-14 13:00:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-14 13:00:57 +0000 UTC 2020-05-14 13:00:57 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-14 13:00:57 +0000 UTC 2020-05-14 13:00:53 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 14 13:00:58.214: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-lrqt5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-lrqt5/replicasets/test-recreate-deployment-589c4bfd,UID:f3ff8019-95e2-11ea-99e8-0242ac110002,ResourceVersion:10537518,Generation:1,CreationTimestamp:2020-05-14 13:00:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment f1708107-95e2-11ea-99e8-0242ac110002 0xc00286666f 0xc002866680}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 14 13:00:58.214: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 14 13:00:58.214: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-lrqt5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-lrqt5/replicasets/test-recreate-deployment-5bf7f65dc,UID:f173cf98-95e2-11ea-99e8-0242ac110002,ResourceVersion:10537509,Generation:2,CreationTimestamp:2020-05-14 13:00:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment f1708107-95e2-11ea-99e8-0242ac110002 0xc002866740 0xc002866741}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 14 13:00:58.219: INFO: Pod "test-recreate-deployment-589c4bfd-v2t9c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-v2t9c,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-lrqt5,SelfLink:/api/v1/namespaces/e2e-tests-deployment-lrqt5/pods/test-recreate-deployment-589c4bfd-v2t9c,UID:f3fffb2c-95e2-11ea-99e8-0242ac110002,ResourceVersion:10537521,Generation:0,CreationTimestamp:2020-05-14 13:00:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd f3ff8019-95e2-11ea-99e8-0242ac110002 0xc000ee979f 0xc000ee97b0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-zdnfw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zdnfw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-zdnfw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ee9820} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ee9840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:00:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:00:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:00:57 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:00:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-14 13:00:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:00:58.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-lrqt5" for this suite. May 14 13:01:06.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:01:06.249: INFO: namespace: e2e-tests-deployment-lrqt5, resource: bindings, ignored listing per whitelist May 14 13:01:06.308: INFO: namespace e2e-tests-deployment-lrqt5 deletion completed in 8.086381681s • [SLOW TEST:13.020 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:01:06.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 14 13:01:06.470: INFO: Waiting up to 5m0s for pod "pod-f9309f1d-95e2-11ea-9b22-0242ac110018" in namespace "e2e-tests-emptydir-l2j9b" to be "success or failure" May 14 13:01:06.482: INFO: Pod "pod-f9309f1d-95e2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.431191ms May 14 13:01:08.510: INFO: Pod "pod-f9309f1d-95e2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03992371s May 14 13:01:10.514: INFO: Pod "pod-f9309f1d-95e2-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044010908s STEP: Saw pod success May 14 13:01:10.514: INFO: Pod "pod-f9309f1d-95e2-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 13:01:10.517: INFO: Trying to get logs from node hunter-worker pod pod-f9309f1d-95e2-11ea-9b22-0242ac110018 container test-container: STEP: delete the pod May 14 13:01:10.551: INFO: Waiting for pod pod-f9309f1d-95e2-11ea-9b22-0242ac110018 to disappear May 14 13:01:10.772: INFO: Pod pod-f9309f1d-95e2-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:01:10.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-l2j9b" for this suite. May 14 13:01:17.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:01:17.043: INFO: namespace: e2e-tests-emptydir-l2j9b, resource: bindings, ignored listing per whitelist May 14 13:01:17.134: INFO: namespace e2e-tests-emptydir-l2j9b deletion completed in 6.356451084s • [SLOW TEST:10.825 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:01:17.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-ffcafc02-95e2-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume configMaps May 14 13:01:17.908: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ffeeb6e7-95e2-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-k5smj" to be "success or failure" May 14 13:01:17.922: INFO: Pod "pod-projected-configmaps-ffeeb6e7-95e2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 13.452135ms May 14 13:01:20.061: INFO: Pod "pod-projected-configmaps-ffeeb6e7-95e2-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152627293s May 14 13:01:22.064: INFO: Pod "pod-projected-configmaps-ffeeb6e7-95e2-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.156218412s STEP: Saw pod success May 14 13:01:22.064: INFO: Pod "pod-projected-configmaps-ffeeb6e7-95e2-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 13:01:22.067: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-ffeeb6e7-95e2-11ea-9b22-0242ac110018 container projected-configmap-volume-test: STEP: delete the pod May 14 13:01:22.344: INFO: Waiting for pod pod-projected-configmaps-ffeeb6e7-95e2-11ea-9b22-0242ac110018 to disappear May 14 13:01:22.468: INFO: Pod pod-projected-configmaps-ffeeb6e7-95e2-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:01:22.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-k5smj" for this suite. May 14 13:01:28.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:01:28.989: INFO: namespace: e2e-tests-projected-k5smj, resource: bindings, ignored listing per whitelist May 14 13:01:29.019: INFO: namespace e2e-tests-projected-k5smj deletion completed in 6.548043782s • [SLOW TEST:11.885 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:01:29.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 13:01:29.198: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06c1eba0-95e3-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-z968v" to be "success or failure" May 14 13:01:29.258: INFO: Pod "downwardapi-volume-06c1eba0-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 60.17285ms May 14 13:01:31.448: INFO: Pod "downwardapi-volume-06c1eba0-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250129854s May 14 13:01:33.452: INFO: Pod "downwardapi-volume-06c1eba0-95e3-11ea-9b22-0242ac110018": Phase="Running", Reason="", readiness=true. Elapsed: 4.25416709s May 14 13:01:35.457: INFO: Pod "downwardapi-volume-06c1eba0-95e3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.259363353s STEP: Saw pod success May 14 13:01:35.457: INFO: Pod "downwardapi-volume-06c1eba0-95e3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 13:01:35.460: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-06c1eba0-95e3-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 13:01:35.524: INFO: Waiting for pod downwardapi-volume-06c1eba0-95e3-11ea-9b22-0242ac110018 to disappear May 14 13:01:35.537: INFO: Pod downwardapi-volume-06c1eba0-95e3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:01:35.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-z968v" for this suite. May 14 13:01:41.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:01:41.594: INFO: namespace: e2e-tests-projected-z968v, resource: bindings, ignored listing per whitelist May 14 13:01:41.634: INFO: namespace e2e-tests-projected-z968v deletion completed in 6.093897439s • [SLOW TEST:12.615 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:01:41.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 13:01:42.049: INFO: Requires at least 2 nodes (not -1) [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 May 14 13:01:42.055: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-65j5q/daemonsets","resourceVersion":"10537716"},"items":null} May 14 13:01:42.057: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-65j5q/pods","resourceVersion":"10537716"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:01:42.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-65j5q" for this suite. 
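Annotator's note: looking back at the projected downwardAPI case that finished just above (before the skipped DaemonSet rollback test): the container declares no memory limit, so the projected downwardAPI file for limits.memory falls back to the node's allocatable memory. A sketch of a pod with that shape (names, mount path and image are illustrative assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A projected volume exposes limits.memory of a container that sets no
	// memory limit; the value written to the file is therefore the node
	// allocatable memory, which is what the test asserts on.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
				// Note: no resources.limits.memory is set on purpose.
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```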
May 14 13:01:48.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:01:48.146: INFO: namespace: e2e-tests-daemonsets-65j5q, resource: bindings, ignored listing per whitelist May 14 13:01:48.186: INFO: namespace e2e-tests-daemonsets-65j5q deletion completed in 6.089343099s S [SKIPPING] [6.552 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should rollback without unnecessary restarts [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 13:01:42.049: Requires at least 2 nodes (not -1) /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:01:48.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 14 13:01:48.287: INFO: Waiting up to 5m0s for pod "pod-1223b543-95e3-11ea-9b22-0242ac110018" in namespace "e2e-tests-emptydir-ltlhl" to be "success or failure" May 14 13:01:48.300: INFO: Pod "pod-1223b543-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 12.565064ms May 14 13:01:50.304: INFO: Pod "pod-1223b543-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01689417s May 14 13:01:52.309: INFO: Pod "pod-1223b543-95e3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020969098s STEP: Saw pod success May 14 13:01:52.309: INFO: Pod "pod-1223b543-95e3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 13:01:52.312: INFO: Trying to get logs from node hunter-worker pod pod-1223b543-95e3-11ea-9b22-0242ac110018 container test-container: STEP: delete the pod May 14 13:01:52.331: INFO: Waiting for pod pod-1223b543-95e3-11ea-9b22-0242ac110018 to disappear May 14 13:01:52.336: INFO: Pod pod-1223b543-95e3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:01:52.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-ltlhl" for this suite. 
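Annotator's note: the (root,0644,tmpfs) emptyDir case above mounts a memory-backed emptyDir and verifies a file created with mode 0644 inside it. The real conformance test uses a dedicated mount-test image; the busybox shell commands below are only a readable stand-in with assumed names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// tmpfs-backed emptyDir (Medium: Memory); the container writes a file,
	// sets mode 0644 and prints the mode back so it can be asserted on.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-0644-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"/bin/sh", "-c",
					"touch /test-volume/file && chmod 0644 /test-volume/file && stat -c '%a' /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```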
May 14 13:01:58.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:01:58.447: INFO: namespace: e2e-tests-emptydir-ltlhl, resource: bindings, ignored listing per whitelist May 14 13:01:58.479: INFO: namespace e2e-tests-emptydir-ltlhl deletion completed in 6.117755032s • [SLOW TEST:10.292 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:01:58.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 13:02:26.643: INFO: Container started at 2020-05-14 13:02:01 +0000 UTC, pod became ready at 2020-05-14 13:02:25 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:02:26.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-q98fv" for this suite. 
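Annotator's note: the probe test above checks exactly two things visible in its log line (container started 13:02:01, Ready at 13:02:25): the pod does not become Ready before the readiness probe's initial delay, and readiness probes never restart a container, they only gate the Ready condition. A minimal sketch of such a probe (delay, period and command are assumed values, not the test's):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	probe := &corev1.Probe{
		InitialDelaySeconds: 20, // pod must not report Ready before this elapses
		PeriodSeconds:       5,
		FailureThreshold:    3,
	}
	// Set via the embedded handler struct so the snippet is not tied to one
	// particular k8s.io/api release's field name for it.
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:           "probe-test",
				Image:          "busybox",
				Command:        []string{"/bin/sh", "-c", "touch /tmp/ready && sleep 3600"},
				ReadinessProbe: probe,
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```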
May 14 13:02:48.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:02:48.683: INFO: namespace: e2e-tests-container-probe-q98fv, resource: bindings, ignored listing per whitelist May 14 13:02:48.746: INFO: namespace e2e-tests-container-probe-q98fv deletion completed in 22.098996991s • [SLOW TEST:50.267 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:02:48.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 14 13:02:48.871: INFO: Waiting up to 5m0s for pod "downward-api-36409e69-95e3-11ea-9b22-0242ac110018" in namespace "e2e-tests-downward-api-bjs2j" to be "success or failure" May 14 13:02:48.875: INFO: Pod "downward-api-36409e69-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.249034ms May 14 13:02:50.881: INFO: Pod "downward-api-36409e69-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009401276s May 14 13:02:52.892: INFO: Pod "downward-api-36409e69-95e3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020707336s STEP: Saw pod success May 14 13:02:52.892: INFO: Pod "downward-api-36409e69-95e3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 13:02:52.895: INFO: Trying to get logs from node hunter-worker pod downward-api-36409e69-95e3-11ea-9b22-0242ac110018 container dapi-container: STEP: delete the pod May 14 13:02:52.919: INFO: Waiting for pod downward-api-36409e69-95e3-11ea-9b22-0242ac110018 to disappear May 14 13:02:52.924: INFO: Pod downward-api-36409e69-95e3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:02:52.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-bjs2j" for this suite. 
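Annotator's note: the Downward API case above exposes limits.cpu and limits.memory as environment variables on a container that sets no limits, so the kubelet substitutes the node's allocatable values. A sketch of that wiring (env var and container names are assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// No resources.limits are declared, so CPU_LIMIT / MEMORY_LIMIT resolve to
	// the node allocatable values - the default behaviour the test verifies.
	container := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"/bin/sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"},
		Env: []corev1.EnvVar{
			{
				Name: "CPU_LIMIT",
				ValueFrom: &corev1.EnvVarSource{
					ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
				},
			},
			{
				Name: "MEMORY_LIMIT",
				ValueFrom: &corev1.EnvVarSource{
					ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
				},
			},
		},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers:    []corev1.Container{container},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```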
May 14 13:02:58.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:02:59.016: INFO: namespace: e2e-tests-downward-api-bjs2j, resource: bindings, ignored listing per whitelist May 14 13:02:59.045: INFO: namespace e2e-tests-downward-api-bjs2j deletion completed in 6.118990246s • [SLOW TEST:10.299 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:02:59.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 13:02:59.164: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
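Annotator's note: the "simple daemon set daemon-set" created here is, in shape, a DaemonSet with a RollingUpdate strategy whose pod template image the test later swaps (from docker.io/library/nginx:1.14-alpine to the redis test image) to drive the update watched below. A minimal sketch with assumed label keys and container name:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One pod per schedulable node; nodes with untolerated taints (such as the
	// master taint logged below) simply do not get a pod.
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
```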
May 14 13:02:59.171: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:02:59.172: INFO: Number of nodes with available pods: 0 May 14 13:02:59.172: INFO: Node hunter-worker is running more than one daemon pod May 14 13:03:00.177: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:00.180: INFO: Number of nodes with available pods: 0 May 14 13:03:00.180: INFO: Node hunter-worker is running more than one daemon pod May 14 13:03:01.177: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:01.180: INFO: Number of nodes with available pods: 0 May 14 13:03:01.180: INFO: Node hunter-worker is running more than one daemon pod May 14 13:03:02.213: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:02.230: INFO: Number of nodes with available pods: 0 May 14 13:03:02.230: INFO: Node hunter-worker is running more than one daemon pod May 14 13:03:03.201: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:03.204: INFO: Number of nodes with available pods: 1 May 14 13:03:03.204: INFO: Node hunter-worker2 is running more than one daemon pod May 14 13:03:04.176: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:04.179: INFO: Number of nodes with available pods: 2 May 14 13:03:04.179: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 14 13:03:04.236: INFO: Wrong image for pod: daemon-set-5dh7q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:04.236: INFO: Wrong image for pod: daemon-set-6wddq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:04.263: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:05.266: INFO: Wrong image for pod: daemon-set-5dh7q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:05.266: INFO: Wrong image for pod: daemon-set-6wddq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:05.269: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:06.367: INFO: Wrong image for pod: daemon-set-5dh7q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:06.367: INFO: Wrong image for pod: daemon-set-6wddq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 14 13:03:06.371: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:07.268: INFO: Wrong image for pod: daemon-set-5dh7q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:07.268: INFO: Wrong image for pod: daemon-set-6wddq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:07.271: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:08.284: INFO: Wrong image for pod: daemon-set-5dh7q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:08.284: INFO: Wrong image for pod: daemon-set-6wddq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:08.284: INFO: Pod daemon-set-6wddq is not available May 14 13:03:08.288: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:09.266: INFO: Wrong image for pod: daemon-set-5dh7q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:09.266: INFO: Wrong image for pod: daemon-set-6wddq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:09.266: INFO: Pod daemon-set-6wddq is not available May 14 13:03:09.269: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:10.282: INFO: Wrong image for pod: daemon-set-5dh7q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:10.282: INFO: Wrong image for pod: daemon-set-6wddq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:10.282: INFO: Pod daemon-set-6wddq is not available May 14 13:03:10.285: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:11.267: INFO: Wrong image for pod: daemon-set-5dh7q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:11.267: INFO: Wrong image for pod: daemon-set-6wddq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:11.267: INFO: Pod daemon-set-6wddq is not available May 14 13:03:11.270: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:12.277: INFO: Wrong image for pod: daemon-set-5dh7q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:12.277: INFO: Pod daemon-set-ps6ck is not available May 14 13:03:12.281: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:13.267: INFO: Wrong image for pod: daemon-set-5dh7q. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:13.267: INFO: Pod daemon-set-ps6ck is not available May 14 13:03:13.271: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:14.302: INFO: Wrong image for pod: daemon-set-5dh7q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:14.302: INFO: Pod daemon-set-ps6ck is not available May 14 13:03:14.369: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:15.267: INFO: Wrong image for pod: daemon-set-5dh7q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:15.271: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:16.267: INFO: Wrong image for pod: daemon-set-5dh7q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:16.271: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:17.267: INFO: Wrong image for pod: daemon-set-5dh7q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 14 13:03:17.267: INFO: Pod daemon-set-5dh7q is not available May 14 13:03:17.272: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:18.268: INFO: Pod daemon-set-c2rfn is not available May 14 13:03:18.271: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
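The "Number of nodes with available pods" / "Number of running nodes" progress lines that follow come from polling until every schedulable node reports an available daemon pod. A rough, hedged equivalent is sketched below using the DaemonSet status fields rather than the framework's own pod-listing helper; the kubeconfig path, namespace, and DaemonSet name are copied from the log, everything else (polling interval, timeout, a recent client-go where Get takes a context) is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "e2e-tests-daemonsets-kw8f8", "daemon-set"

	// Poll until every node that should run a daemon pod reports an
	// available one, mirroring the "Number of nodes with available
	// pods" progress lines in the log.
	err = wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("available %d / desired %d\n",
			ds.Status.NumberAvailable, ds.Status.DesiredNumberScheduled)
		return ds.Status.DesiredNumberScheduled > 0 &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
	if err != nil {
		panic(err)
	}
}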
May 14 13:03:18.274: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:18.275: INFO: Number of nodes with available pods: 1 May 14 13:03:18.275: INFO: Node hunter-worker is running more than one daemon pod May 14 13:03:19.279: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:19.282: INFO: Number of nodes with available pods: 1 May 14 13:03:19.282: INFO: Node hunter-worker is running more than one daemon pod May 14 13:03:20.327: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:20.513: INFO: Number of nodes with available pods: 1 May 14 13:03:20.513: INFO: Node hunter-worker is running more than one daemon pod May 14 13:03:21.279: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:21.282: INFO: Number of nodes with available pods: 1 May 14 13:03:21.282: INFO: Node hunter-worker is running more than one daemon pod May 14 13:03:22.280: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 14 13:03:22.284: INFO: Number of nodes with available pods: 2 May 14 13:03:22.284: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kw8f8, will wait for the garbage collector to delete the pods May 14 13:03:22.357: INFO: Deleting DaemonSet.extensions daemon-set took: 6.001655ms May 14 13:03:22.457: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.419837ms May 14 13:03:31.769: INFO: Number of nodes with available pods: 0 May 14 13:03:31.769: INFO: Number of running nodes: 0, number of available pods: 0 May 14 13:03:31.772: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kw8f8/daemonsets","resourceVersion":"10538081"},"items":null} May 14 13:03:31.774: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kw8f8/pods","resourceVersion":"10538081"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:03:31.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-kw8f8" for this suite. 
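What the RollingUpdate spec above exercises: the DaemonSet's pod-template image is changed from docker.io/library/nginx:1.14-alpine to gcr.io/kubernetes-e2e-test-images/redis:1.0, and the "Wrong image for pod ..." lines track the old pods being replaced node by node until every node runs the new image. Below is a minimal sketch of the same image bump through the typed client; it assumes a recent client-go (where Get/Update take a context and options) and is not the e2e suite's actual implementation. The namespace and object name are taken from the log.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "e2e-tests-daemonsets-kw8f8"

	// Fetch the DaemonSet, bump the pod-template image, and push the
	// update. With updateStrategy RollingUpdate the controller then
	// replaces the old nginx pods one node at a time, which is what
	// the "Wrong image for pod ..." log lines are tracking.
	ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ds.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/redis:1.0"
	if _, err := cs.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}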
May 14 13:03:37.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:03:37.896: INFO: namespace: e2e-tests-daemonsets-kw8f8, resource: bindings, ignored listing per whitelist May 14 13:03:37.955: INFO: namespace e2e-tests-daemonsets-kw8f8 deletion completed in 6.171562835s • [SLOW TEST:38.909 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:03:37.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-53912145-95e3-11ea-9b22-0242ac110018 STEP: Creating a pod to test consume secrets May 14 13:03:38.051: INFO: Waiting up to 5m0s for pod "pod-secrets-53927770-95e3-11ea-9b22-0242ac110018" in namespace "e2e-tests-secrets-pcgbf" to be "success or failure" May 14 13:03:38.081: INFO: Pod "pod-secrets-53927770-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 29.831785ms May 14 13:03:40.135: INFO: Pod "pod-secrets-53927770-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083550909s May 14 13:03:42.309: INFO: Pod "pod-secrets-53927770-95e3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.257667424s STEP: Saw pod success May 14 13:03:42.309: INFO: Pod "pod-secrets-53927770-95e3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 13:03:42.311: INFO: Trying to get logs from node hunter-worker pod pod-secrets-53927770-95e3-11ea-9b22-0242ac110018 container secret-volume-test: STEP: delete the pod May 14 13:03:42.333: INFO: Waiting for pod pod-secrets-53927770-95e3-11ea-9b22-0242ac110018 to disappear May 14 13:03:42.354: INFO: Pod pod-secrets-53927770-95e3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:03:42.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-pcgbf" for this suite. 
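"Consumable from pods in volume with mappings" means the secret is mounted through a secret volume whose items list remaps a key to a chosen path inside the container, rather than using the key name as the file name. The pod-spec sketch below shows that mapping; the secret name and container name come from the log, while the key, path, mode, image, and command are illustrative assumptions, not the conformance test's exact values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0644) // assumed per-file mode
	pod := corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "secret-volume",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{
					SecretName: "secret-test-map-53912145-95e3-11ea-9b22-0242ac110018",
					// The "mappings" part: project key "data-1" to a new
					// relative path instead of mounting it under its own name.
					Items: []corev1.KeyToPath{{
						Key:  "data-1",          // assumed key name
						Path: "new-path-data-1", // assumed target path
						Mode: &mode,
					}},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:    "secret-volume-test",
			Image:   "docker.io/library/busybox:1.29", // illustrative image
			Command: []string{"cat", "/etc/secret-volume/new-path-data-1"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "secret-volume",
				MountPath: "/etc/secret-volume",
			}},
		}},
		RestartPolicy: corev1.RestartPolicyNever,
	}
	fmt.Println(pod.Volumes[0].Name)
}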
May 14 13:03:48.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:03:48.443: INFO: namespace: e2e-tests-secrets-pcgbf, resource: bindings, ignored listing per whitelist May 14 13:03:48.470: INFO: namespace e2e-tests-secrets-pcgbf deletion completed in 6.113663256s • [SLOW TEST:10.515 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:03:48.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 13:03:48.587: INFO: Creating ReplicaSet my-hostname-basic-59db3346-95e3-11ea-9b22-0242ac110018 May 14 13:03:48.608: INFO: Pod name my-hostname-basic-59db3346-95e3-11ea-9b22-0242ac110018: Found 0 pods out of 1 May 14 13:03:53.613: INFO: Pod name my-hostname-basic-59db3346-95e3-11ea-9b22-0242ac110018: Found 1 pods out of 1 May 14 13:03:53.613: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-59db3346-95e3-11ea-9b22-0242ac110018" is running May 14 13:03:53.616: INFO: Pod "my-hostname-basic-59db3346-95e3-11ea-9b22-0242ac110018-k7g4j" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 13:03:48 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 13:03:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 13:03:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-14 13:03:48 +0000 UTC Reason: Message:}]) May 14 13:03:53.616: INFO: Trying to dial the pod May 14 13:03:58.628: INFO: Controller my-hostname-basic-59db3346-95e3-11ea-9b22-0242ac110018: Got expected result from replica 1 [my-hostname-basic-59db3346-95e3-11ea-9b22-0242ac110018-k7g4j]: "my-hostname-basic-59db3346-95e3-11ea-9b22-0242ac110018-k7g4j", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:03:58.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-gfj94" for this suite. 
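The ReplicaSet spec above creates a single-replica ReplicaSet running a "serve hostname" style image, waits for the pod to be Running, then dials it and expects the pod's own name back ("Got expected result from replica 1 ..."). A hedged sketch of creating such a ReplicaSet follows; the namespace comes from the log, while the object name, labels, image, and argument are assumptions standing in for the conformance image, and a recent client-go (context-taking Create) is assumed.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	name := "my-hostname-basic"
	replicas := int32(1)
	labels := map[string]string{"name": name}
	rs := &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name: name,
						// Assumed stand-in for the "serve hostname" conformance
						// image: it answers requests with its own pod name.
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
						Args:  []string{"serve-hostname"},
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().ReplicaSets("e2e-tests-replicaset-gfj94").Create(
		context.TODO(), rs, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}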
May 14 13:04:04.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:04:04.691: INFO: namespace: e2e-tests-replicaset-gfj94, resource: bindings, ignored listing per whitelist May 14 13:04:04.768: INFO: namespace e2e-tests-replicaset-gfj94 deletion completed in 6.136807997s • [SLOW TEST:16.298 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:04:04.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 13:04:04.903: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6392eb0c-95e3-11ea-9b22-0242ac110018" in namespace "e2e-tests-downward-api-wxpcj" to be "success or failure" May 14 13:04:04.907: INFO: Pod "downwardapi-volume-6392eb0c-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.616435ms May 14 13:04:06.925: INFO: Pod "downwardapi-volume-6392eb0c-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021872858s May 14 13:04:08.929: INFO: Pod "downwardapi-volume-6392eb0c-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025310563s May 14 13:04:10.932: INFO: Pod "downwardapi-volume-6392eb0c-95e3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028598069s STEP: Saw pod success May 14 13:04:10.932: INFO: Pod "downwardapi-volume-6392eb0c-95e3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 13:04:10.935: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6392eb0c-95e3-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 13:04:10.999: INFO: Waiting for pod downwardapi-volume-6392eb0c-95e3-11ea-9b22-0242ac110018 to disappear May 14 13:04:11.039: INFO: Pod downwardapi-volume-6392eb0c-95e3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:04:11.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wxpcj" for this suite. 
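"Should set mode on item file" exercises a downwardAPI volume whose item carries an explicit per-file mode, so the projected file (here the pod name) is created with that permission inside the container. Below is a minimal sketch of that volume definition only; the mode value, volume name, and file path are assumptions, with just the general shape following the test name.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // assumed per-item mode; the test verifies the file gets it
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "podname",
					FieldRef: &corev1.ObjectFieldSelector{
						FieldPath: "metadata.name",
					},
					Mode: &mode,
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}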
May 14 13:04:17.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:04:17.104: INFO: namespace: e2e-tests-downward-api-wxpcj, resource: bindings, ignored listing per whitelist May 14 13:04:17.135: INFO: namespace e2e-tests-downward-api-wxpcj deletion completed in 6.092325348s • [SLOW TEST:12.366 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:04:17.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 14 13:04:17.322: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6af4dbce-95e3-11ea-9b22-0242ac110018" in namespace "e2e-tests-projected-jmk4k" to be "success or failure" May 14 13:04:17.325: INFO: Pod "downwardapi-volume-6af4dbce-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 3.281118ms May 14 13:04:19.417: INFO: Pod "downwardapi-volume-6af4dbce-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09525818s May 14 13:04:21.854: INFO: Pod "downwardapi-volume-6af4dbce-95e3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.532307295s STEP: Saw pod success May 14 13:04:21.854: INFO: Pod "downwardapi-volume-6af4dbce-95e3-11ea-9b22-0242ac110018" satisfied condition "success or failure" May 14 13:04:21.857: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-6af4dbce-95e3-11ea-9b22-0242ac110018 container client-container: STEP: delete the pod May 14 13:04:21.915: INFO: Waiting for pod downwardapi-volume-6af4dbce-95e3-11ea-9b22-0242ac110018 to disappear May 14 13:04:22.051: INFO: Pod downwardapi-volume-6af4dbce-95e3-11ea-9b22-0242ac110018 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:04:22.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jmk4k" for this suite. 
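The projected downwardAPI spec above surfaces the container's CPU request as a file in a projected volume via a resourceFieldRef, and the client-container then prints it. The sketch below shows that volume plus a matching request; the 250m value, image, paths, and command are assumptions, while the mechanism (projected volume, downwardAPI projection, requests.cpu resource field) follows the test name.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	container := corev1.Container{
		Name:    "client-container",
		Image:   "docker.io/library/busybox:1.29", // illustrative
		Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU: resource.MustParse("250m"), // assumed value
			},
		},
		VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
	}
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Println(container.Name, vol.Name)
}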
May 14 13:04:28.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:04:28.125: INFO: namespace: e2e-tests-projected-jmk4k, resource: bindings, ignored listing per whitelist May 14 13:04:28.177: INFO: namespace e2e-tests-projected-jmk4k deletion completed in 6.122947262s • [SLOW TEST:11.042 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:04:28.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 14 13:04:28.370: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 14 13:04:33.381: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 14 13:04:33.381: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 14 13:04:35.387: INFO: Creating deployment "test-rollover-deployment" May 14 13:04:35.459: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 14 13:04:37.464: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 14 13:04:37.469: INFO: Ensure that both replica sets have 1 created replica May 14 13:04:37.474: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 14 13:04:37.507: INFO: Updating deployment test-rollover-deployment May 14 13:04:37.507: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 14 13:04:39.530: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 14 13:04:39.536: INFO: Make sure deployment "test-rollover-deployment" is complete May 14 13:04:39.542: INFO: all replica sets need to contain the pod-template-hash label May 14 13:04:39.542: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058277, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, 
loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:04:41.551: INFO: all replica sets need to contain the pod-template-hash label May 14 13:04:41.551: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058281, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:04:43.549: INFO: all replica sets need to contain the pod-template-hash label May 14 13:04:43.549: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058281, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:04:45.548: INFO: all replica sets need to contain the pod-template-hash label May 14 13:04:45.548: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058281, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:04:47.549: INFO: all replica sets need to contain the pod-template-hash label May 14 13:04:47.549: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058281, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:04:49.549: INFO: all replica sets need to contain the pod-template-hash label May 14 13:04:49.549: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058281, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725058275, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 14 13:04:51.551: INFO: May 14 13:04:51.551: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 14 13:04:51.832: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-dcxlf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dcxlf/deployments/test-rollover-deployment,UID:75c070e5-95e3-11ea-99e8-0242ac110002,ResourceVersion:10538444,Generation:2,CreationTimestamp:2020-05-14 13:04:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-14 13:04:35 +0000 UTC 2020-05-14 13:04:35 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-14 13:04:51 +0000 UTC 2020-05-14 13:04:35 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 14 13:04:51.836: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-dcxlf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dcxlf/replicasets/test-rollover-deployment-5b8479fdb6,UID:7703f715-95e3-11ea-99e8-0242ac110002,ResourceVersion:10538434,Generation:2,CreationTimestamp:2020-05-14 13:04:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 75c070e5-95e3-11ea-99e8-0242ac110002 0xc001c59027 0xc001c59028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 14 13:04:51.836: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 14 13:04:51.837: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-dcxlf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dcxlf/replicasets/test-rollover-controller,UID:718d0bdb-95e3-11ea-99e8-0242ac110002,ResourceVersion:10538442,Generation:2,CreationTimestamp:2020-05-14 13:04:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 75c070e5-95e3-11ea-99e8-0242ac110002 0xc001c58ce7 0xc001c58ce8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 14 13:04:51.837: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-dcxlf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-dcxlf/replicasets/test-rollover-deployment-58494b7559,UID:75cce1f9-95e3-11ea-99e8-0242ac110002,ResourceVersion:10538394,Generation:2,CreationTimestamp:2020-05-14 13:04:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 75c070e5-95e3-11ea-99e8-0242ac110002 0xc001c58da7 0xc001c58da8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 14 13:04:51.840: INFO: Pod "test-rollover-deployment-5b8479fdb6-q4kvf" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-q4kvf,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-dcxlf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-dcxlf/pods/test-rollover-deployment-5b8479fdb6-q4kvf,UID:771c29fb-95e3-11ea-99e8-0242ac110002,ResourceVersion:10538412,Generation:0,CreationTimestamp:2020-05-14 13:04:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 7703f715-95e3-11ea-99e8-0242ac110002 0xc001e82137 0xc001e82138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-q8q4z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-q8q4z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-q8q4z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e821b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e821d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:04:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:04:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-14 13:04:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-05-14 13:04:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.222,StartTime:2020-05-14 13:04:37 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-14 13:04:40 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://33ee05003b17a0e4e51b337f66a753318b5cea7e8be868adbdaebee6a12b3dd5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 14 13:04:51.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-dcxlf" for this suite. May 14 13:04:59.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 14 13:04:59.892: INFO: namespace: e2e-tests-deployment-dcxlf, resource: bindings, ignored listing per whitelist May 14 13:04:59.911: INFO: namespace e2e-tests-deployment-dcxlf deletion completed in 8.067366532s • [SLOW TEST:31.733 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 14 13:04:59.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 14 13:05:00.156: INFO: Waiting up to 5m0s for pod "pod-8480bcac-95e3-11ea-9b22-0242ac110018" in namespace "e2e-tests-emptydir-g8lzx" to be "success or failure" May 14 13:05:00.243: INFO: Pod "pod-8480bcac-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 87.84088ms May 14 13:05:02.248: INFO: Pod "pod-8480bcac-95e3-11ea-9b22-0242ac110018": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092320748s May 14 13:05:04.252: INFO: Pod "pod-8480bcac-95e3-11ea-9b22-0242ac110018": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.096540468s
STEP: Saw pod success
May 14 13:05:04.252: INFO: Pod "pod-8480bcac-95e3-11ea-9b22-0242ac110018" satisfied condition "success or failure"
May 14 13:05:04.256: INFO: Trying to get logs from node hunter-worker pod pod-8480bcac-95e3-11ea-9b22-0242ac110018 container test-container: 
STEP: delete the pod
May 14 13:05:04.389: INFO: Waiting for pod pod-8480bcac-95e3-11ea-9b22-0242ac110018 to disappear
May 14 13:05:04.400: INFO: Pod pod-8480bcac-95e3-11ea-9b22-0242ac110018 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 14 13:05:04.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-g8lzx" for this suite.
May 14 13:05:10.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 14 13:05:10.455: INFO: namespace: e2e-tests-emptydir-g8lzx, resource: bindings, ignored listing per whitelist
May 14 13:05:10.476: INFO: namespace e2e-tests-emptydir-g8lzx deletion completed in 6.071387484s
• [SLOW TEST:10.565 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
May 14 13:05:10.476: INFO: Running AfterSuite actions on all nodes
May 14 13:05:10.476: INFO: Running AfterSuite actions on node 1
May 14 13:05:10.476: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Fail] [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod [It] should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:123

Ran 200 of 2164 Specs in 8295.738 seconds
FAIL! -- 199 Passed | 1 Failed | 0 Pending | 1964 Skipped
--- FAIL: TestE2E (8295.94s)
FAIL
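The last complete spec before the suite summary is the emptyDir (root,0666,default) test: a pod mounts an emptyDir backed by the node's default medium, writes and checks a file, and must reach Succeeded, exactly the "success or failure" polling shown above. The sketch below reproduces that shape under stated assumptions: the image, command, namespace, and timings are illustrative, the mode check is folded into the shell command rather than done by the conformance mount-test image, and a recent client-go (context-taking Create/Get) is assumed.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "default" // the real run uses a generated e2e-tests-emptydir-* namespace

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Default medium = backed by node storage; the tmpfs
					// variants earlier in the run use StorageMediumMemory.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29", // illustrative
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	created, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// "success or failure": wait for the pod to reach Succeeded, as the log does.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, created.Name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Println("phase:", p.Status.Phase)
		return p.Status.Phase == corev1.PodSucceeded, nil
	})
	if err != nil {
		panic(err)
	}
}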