I0502 10:46:44.026478 6 e2e.go:224] Starting e2e run "3666bfb6-8c62-11ea-8045-0242ac110017" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588416403 - Will randomize all specs
Will run 201 of 2164 specs

May 2 10:46:44.220: INFO: >>> kubeConfig: /root/.kube/config
May 2 10:46:44.224: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 2 10:46:44.242: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 2 10:46:44.279: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 2 10:46:44.279: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 2 10:46:44.279: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 2 10:46:44.291: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 2 10:46:44.291: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 2 10:46:44.291: INFO: e2e test version: v1.13.12
May 2 10:46:44.292: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:46:44.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
May 2 10:46:44.458: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
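The ConfigMap spec that starts here creates a ConfigMap, mounts it into a pod as a volume, updates the ConfigMap, and waits until the kubelet syncs the new data into the mounted files. A minimal sketch of that kind of mount follows; all names are hypothetical and not taken from this run, and the script only prints the manifest (against a live cluster one would pipe it to `kubectl apply -f -`):

```shell
# Hypothetical ConfigMap-as-volume manifest; names are illustrative only.
manifest='
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: reader
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 5; done"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: demo-config
'
# To submit it for real: printf "%s" "$manifest" | kubectl apply -f -
printf '%s' "$manifest"
```

Editing `data-1` in the ConfigMap and re-applying it is what the test's "Updating configmap" / "waiting to observe update in volume" steps exercise: the kubelet eventually rewrites the file under `/etc/config` without restarting the pod.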
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-36f0a839-8c62-11ea-8045-0242ac110017
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-36f0a839-8c62-11ea-8045-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:46:50.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-c2gbd" for this suite.
May 2 10:47:12.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:47:12.583: INFO: namespace: e2e-tests-configmap-c2gbd, resource: bindings, ignored listing per whitelist
May 2 10:47:12.620: INFO: namespace e2e-tests-configmap-c2gbd deletion completed in 22.099631757s

• [SLOW TEST:28.328 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:47:12.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 2 10:47:12.749: INFO: Waiting up to 5m0s for pod "pod-47cb2bd8-8c62-11ea-8045-0242ac110017" in namespace "e2e-tests-emptydir-nrppn" to be "success or failure"
May 2 10:47:12.753: INFO: Pod "pod-47cb2bd8-8c62-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.928528ms
May 2 10:47:14.757: INFO: Pod "pod-47cb2bd8-8c62-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0079073s
May 2 10:47:16.762: INFO: Pod "pod-47cb2bd8-8c62-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012304077s
STEP: Saw pod success
May 2 10:47:16.762: INFO: Pod "pod-47cb2bd8-8c62-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 10:47:16.765: INFO: Trying to get logs from node hunter-worker2 pod pod-47cb2bd8-8c62-11ea-8045-0242ac110017 container test-container:
STEP: delete the pod
May 2 10:47:16.807: INFO: Waiting for pod pod-47cb2bd8-8c62-11ea-8045-0242ac110017 to disappear
May 2 10:47:16.939: INFO: Pod pod-47cb2bd8-8c62-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:47:16.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nrppn" for this suite.
May 2 10:47:22.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:47:22.977: INFO: namespace: e2e-tests-emptydir-nrppn, resource: bindings, ignored listing per whitelist
May 2 10:47:23.058: INFO: namespace e2e-tests-emptydir-nrppn deletion completed in 6.115000271s

• [SLOW TEST:10.438 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:47:23.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
May 2 10:47:27.719: INFO: Successfully updated pod "annotationupdate4e0375e2-8c62-11ea-8045-0242ac110017"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:47:29.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
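The Downward API spec above projects pod metadata (here, annotations) into a file inside a volume and then verifies the file content changes when the annotations are modified. A hedged sketch of such a projection, with all names assumed rather than taken from the run, printed rather than applied:

```shell
# Hypothetical downwardAPI-volume manifest; pod name, image, and paths
# are assumptions for illustration.
manifest='
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: client
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
'
printf '%s' "$manifest"
```

Changing the `build` annotation on the live pod object would eventually be reflected in `/etc/podinfo/annotations`, which is the behavior the test asserts.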
STEP: Destroying namespace "e2e-tests-downward-api-fbggh" for this suite.
May 2 10:47:51.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:47:51.794: INFO: namespace: e2e-tests-downward-api-fbggh, resource: bindings, ignored listing per whitelist
May 2 10:47:51.846: INFO: namespace e2e-tests-downward-api-fbggh deletion completed in 22.089576916s

• [SLOW TEST:28.787 seconds]
[sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:47:51.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
May 2 10:47:51.966: INFO: Waiting up to 5m0s for pod "pod-5f2a36ac-8c62-11ea-8045-0242ac110017" in namespace "e2e-tests-emptydir-nmr8p" to be "success or failure"
May 2 10:47:51.970: INFO: Pod "pod-5f2a36ac-8c62-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.876844ms
May 2 10:47:53.974: INFO: Pod "pod-5f2a36ac-8c62-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008103252s
May 2 10:47:55.999: INFO: Pod "pod-5f2a36ac-8c62-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032840782s
STEP: Saw pod success
May 2 10:47:55.999: INFO: Pod "pod-5f2a36ac-8c62-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 10:47:56.002: INFO: Trying to get logs from node hunter-worker2 pod pod-5f2a36ac-8c62-11ea-8045-0242ac110017 container test-container:
STEP: delete the pod
May 2 10:47:56.067: INFO: Waiting for pod pod-5f2a36ac-8c62-11ea-8045-0242ac110017 to disappear
May 2 10:47:56.082: INFO: Pod pod-5f2a36ac-8c62-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:47:56.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nmr8p" for this suite.
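The two EmptyDir specs in this run, (non-root,0666,tmpfs) and (non-root,0777,default), differ only in the volume medium and the file mode under test. A hedged sketch of the pod shape (names and user ID are assumptions; the script prints the manifest instead of applying it):

```shell
# Hypothetical emptyDir pod; runAsUser/medium mirror the spec variants above.
manifest='
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  securityContext:
    runAsUser: 1001        # non-root, as in the (non-root,...) variants
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory       # tmpfs variant; omit medium for the node-default disk
'
printf '%s' "$manifest"
```

With `medium: Memory` the volume is backed by tmpfs; leaving `medium` unset gives the node's default storage, which is what the "default" variant exercises.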
May 2 10:48:02.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:48:02.205: INFO: namespace: e2e-tests-emptydir-nmr8p, resource: bindings, ignored listing per whitelist
May 2 10:48:02.256: INFO: namespace e2e-tests-emptydir-nmr8p deletion completed in 6.170827079s

• [SLOW TEST:10.410 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:48:02.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
May 2 10:48:02.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pmbd8'
May 2 10:48:04.967: INFO: stderr: ""
May 2 10:48:04.967: INFO: stdout: "pod/pause created\n"
May 2 10:48:04.967: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May 2 10:48:04.967: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-pmbd8" to be "running and ready"
May 2 10:48:04.982: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.06018ms
May 2 10:48:06.987: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019575985s
May 2 10:48:08.991: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.02374651s
May 2 10:48:08.991: INFO: Pod "pause" satisfied condition "running and ready"
May 2 10:48:08.991: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
May 2 10:48:08.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-pmbd8'
May 2 10:48:09.096: INFO: stderr: ""
May 2 10:48:09.096: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 2 10:48:09.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-pmbd8'
May 2 10:48:09.214: INFO: stderr: ""
May 2 10:48:09.214: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n"
STEP: removing the label testing-label of a pod
May 2 10:48:09.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-pmbd8'
May 2 10:48:09.345: INFO: stderr: ""
May 2 10:48:09.345: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 2 10:48:09.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-pmbd8'
May 2 10:48:09.472: INFO: stderr: ""
May 2 10:48:09.472: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
May 2 10:48:09.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pmbd8'
May 2 10:48:09.585: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 2 10:48:09.585: INFO: stdout: "pod \"pause\" force deleted\n"
May 2 10:48:09.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-pmbd8'
May 2 10:48:09.859: INFO: stderr: "No resources found.\n"
May 2 10:48:09.859: INFO: stdout: ""
May 2 10:48:09.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-pmbd8 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 2 10:48:09.957: INFO: stderr: ""
May 2 10:48:09.957: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:48:09.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pmbd8" for this suite.
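The label test above is a plain kubectl add/verify/remove cycle. The equivalent command sequence, shown here for a hypothetical pod "pause" in a hypothetical namespace "demo" (a reachable cluster would be required, so the sketch only prints the commands rather than running them):

```shell
# The four-step label cycle driven by the test; cluster, namespace,
# and pod name are assumptions for illustration.
cmds='
kubectl label pods pause testing-label=testing-label-value -n demo
kubectl get pod pause -L testing-label -n demo
kubectl label pods pause testing-label- -n demo
kubectl get pod pause -L testing-label -n demo
'
printf '%s' "$cmds"
```

A trailing `-` after the key (`testing-label-`) removes the label, which is why the final `get -L testing-label` in the log shows an empty TESTING-LABEL column.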
May 2 10:48:15.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:48:16.044: INFO: namespace: e2e-tests-kubectl-pmbd8, resource: bindings, ignored listing per whitelist
May 2 10:48:16.061: INFO: namespace e2e-tests-kubectl-pmbd8 deletion completed in 6.100107473s

• [SLOW TEST:13.804 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:48:16.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-6d99ae8a-8c62-11ea-8045-0242ac110017
STEP: Creating a pod to test consume configMaps
May 2 10:48:16.195: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6d9c66f7-8c62-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-kxt97" to be "success or failure"
May 2 10:48:16.210: INFO: Pod "pod-projected-configmaps-6d9c66f7-8c62-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 15.823081ms
May 2 10:48:18.215: INFO: Pod "pod-projected-configmaps-6d9c66f7-8c62-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020088448s
May 2 10:48:20.219: INFO: Pod "pod-projected-configmaps-6d9c66f7-8c62-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02458263s
STEP: Saw pod success
May 2 10:48:20.219: INFO: Pod "pod-projected-configmaps-6d9c66f7-8c62-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 10:48:20.223: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-6d9c66f7-8c62-11ea-8045-0242ac110017 container projected-configmap-volume-test:
STEP: delete the pod
May 2 10:48:20.254: INFO: Waiting for pod pod-projected-configmaps-6d9c66f7-8c62-11ea-8045-0242ac110017 to disappear
May 2 10:48:20.287: INFO: Pod pod-projected-configmaps-6d9c66f7-8c62-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:48:20.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kxt97" for this suite.
May 2 10:48:26.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:48:26.341: INFO: namespace: e2e-tests-projected-kxt97, resource: bindings, ignored listing per whitelist
May 2 10:48:26.429: INFO: namespace e2e-tests-projected-kxt97 deletion completed in 6.124130714s

• [SLOW TEST:10.368 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:48:26.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-ft88
STEP: Creating a pod to test atomic-volume-subpath
May 2 10:48:26.541: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ft88" in namespace "e2e-tests-subpath-9r9d4" to be "success or failure"
May 2 10:48:26.544: INFO: Pod "pod-subpath-test-configmap-ft88": Phase="Pending", Reason="", readiness=false. Elapsed: 3.302911ms
May 2 10:48:28.628: INFO: Pod "pod-subpath-test-configmap-ft88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086557483s
May 2 10:48:30.632: INFO: Pod "pod-subpath-test-configmap-ft88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091174191s
May 2 10:48:32.636: INFO: Pod "pod-subpath-test-configmap-ft88": Phase="Running", Reason="", readiness=true. Elapsed: 6.095161764s
May 2 10:48:34.641: INFO: Pod "pod-subpath-test-configmap-ft88": Phase="Running", Reason="", readiness=false. Elapsed: 8.099790478s
May 2 10:48:36.646: INFO: Pod "pod-subpath-test-configmap-ft88": Phase="Running", Reason="", readiness=false. Elapsed: 10.104481763s
May 2 10:48:38.650: INFO: Pod "pod-subpath-test-configmap-ft88": Phase="Running", Reason="", readiness=false. Elapsed: 12.109089682s
May 2 10:48:40.655: INFO: Pod "pod-subpath-test-configmap-ft88": Phase="Running", Reason="", readiness=false. Elapsed: 14.113748641s
May 2 10:48:42.659: INFO: Pod "pod-subpath-test-configmap-ft88": Phase="Running", Reason="", readiness=false. Elapsed: 16.117321673s
May 2 10:48:44.663: INFO: Pod "pod-subpath-test-configmap-ft88": Phase="Running", Reason="", readiness=false. Elapsed: 18.12184954s
May 2 10:48:46.667: INFO: Pod "pod-subpath-test-configmap-ft88": Phase="Running", Reason="", readiness=false. Elapsed: 20.12614015s
May 2 10:48:48.671: INFO: Pod "pod-subpath-test-configmap-ft88": Phase="Running", Reason="", readiness=false. Elapsed: 22.130265474s
May 2 10:48:50.675: INFO: Pod "pod-subpath-test-configmap-ft88": Phase="Running", Reason="", readiness=false. Elapsed: 24.133859545s
May 2 10:48:52.724: INFO: Pod "pod-subpath-test-configmap-ft88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.182805456s
STEP: Saw pod success
May 2 10:48:52.724: INFO: Pod "pod-subpath-test-configmap-ft88" satisfied condition "success or failure"
May 2 10:48:52.727: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-ft88 container test-container-subpath-configmap-ft88:
STEP: delete the pod
May 2 10:48:52.881: INFO: Waiting for pod pod-subpath-test-configmap-ft88 to disappear
May 2 10:48:52.888: INFO: Pod pod-subpath-test-configmap-ft88 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-ft88
May 2 10:48:52.888: INFO: Deleting pod "pod-subpath-test-configmap-ft88" in namespace "e2e-tests-subpath-9r9d4"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:48:52.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-9r9d4" for this suite.
May 2 10:49:01.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:49:01.191: INFO: namespace: e2e-tests-subpath-9r9d4, resource: bindings, ignored listing per whitelist
May 2 10:49:01.195: INFO: namespace e2e-tests-subpath-9r9d4 deletion completed in 8.239899293s

• [SLOW TEST:34.766 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:49:01.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-6vjt9
May 2 10:49:05.466: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-6vjt9
STEP: checking the pod's current state and verifying that restartCount is present
May 2 10:49:05.469: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:53:06.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-6vjt9" for this suite.
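The probe spec above starts a pod named liveness-http with an HTTP liveness probe on /healthz, then watches restartCount for roughly four minutes (note the jump from 10:49 to 10:53 in the timestamps) to confirm it stays at 0. A hedged sketch of that probe shape follows; the image, port, and timing values are assumptions, not read from this run, and the script only prints the manifest:

```shell
# Hypothetical HTTP-liveness pod; image/port/timings are illustrative.
manifest='
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: registry.example.com/healthz-server   # hypothetical image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 3
'
printf '%s' "$manifest"
```

As long as the server keeps answering /healthz with a 2xx/3xx status, the kubelet never kills the container, so restartCount stays at its initial value of 0, which is exactly what this "should *not* be restarted" spec asserts.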
May 2 10:53:12.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:53:13.036: INFO: namespace: e2e-tests-container-probe-6vjt9, resource: bindings, ignored listing per whitelist
May 2 10:53:13.047: INFO: namespace e2e-tests-container-probe-6vjt9 deletion completed in 6.081567236s

• [SLOW TEST:251.852 seconds]
[k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:53:13.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-1e9ccd7d-8c63-11ea-8045-0242ac110017
STEP: Creating a pod to test consume configMaps
May 2 10:53:13.203: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e9d62fb-8c63-11ea-8045-0242ac110017" in namespace "e2e-tests-configmap-jppgf" to be "success or failure"
May 2 10:53:13.210: INFO: Pod "pod-configmaps-1e9d62fb-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 7.681902ms
May 2 10:53:15.399: INFO: Pod "pod-configmaps-1e9d62fb-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196002919s
May 2 10:53:17.402: INFO: Pod "pod-configmaps-1e9d62fb-8c63-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.199464436s
STEP: Saw pod success
May 2 10:53:17.402: INFO: Pod "pod-configmaps-1e9d62fb-8c63-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 10:53:17.406: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-1e9d62fb-8c63-11ea-8045-0242ac110017 container configmap-volume-test:
STEP: delete the pod
May 2 10:53:17.495: INFO: Waiting for pod pod-configmaps-1e9d62fb-8c63-11ea-8045-0242ac110017 to disappear
May 2 10:53:17.503: INFO: Pod pod-configmaps-1e9d62fb-8c63-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:53:17.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jppgf" for this suite.
May 2 10:53:23.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:53:23.628: INFO: namespace: e2e-tests-configmap-jppgf, resource: bindings, ignored listing per whitelist
May 2 10:53:23.653: INFO: namespace e2e-tests-configmap-jppgf deletion completed in 6.146412232s

• [SLOW TEST:10.606 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace
  should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:53:23.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 2 10:53:23.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bv4vg'
May 2 10:53:23.864: INFO: stderr: ""
May 2 10:53:23.864: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
May 2 10:53:28.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bv4vg -o json'
May 2 10:53:29.007: INFO: stderr: ""
May 2 10:53:29.007: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-02T10:53:23Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-bv4vg\",\n \"resourceVersion\": \"8332944\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-bv4vg/pods/e2e-test-nginx-pod\",\n \"uid\": \"24fdaa15-8c63-11ea-99e8-0242ac110002\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-b5lkb\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-b5lkb\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-b5lkb\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-02T10:53:23Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-02T10:53:27Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-02T10:53:27Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-02T10:53:23Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b621b0ff3e17898d22421e532ee7052d75beea2db0d2b5fc9febd699f7e631b2\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-02T10:53:26Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.148\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-02T10:53:23Z\"\n }\n}\n"
STEP: replace the image in the pod
May 2 10:53:29.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-bv4vg'
May 2 10:53:29.285: INFO: stderr: ""
May 2 10:53:29.285: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
May 2 10:53:29.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-bv4vg'
May 2 10:53:32.323: INFO: stderr: ""
May 2 10:53:32.323: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:53:32.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-bv4vg" for this suite.
May 2 10:53:38.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:53:38.363: INFO: namespace: e2e-tests-kubectl-bv4vg, resource: bindings, ignored listing per whitelist
May 2 10:53:38.423: INFO: namespace e2e-tests-kubectl-bv4vg deletion completed in 6.091688233s

• [SLOW TEST:14.770 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:53:38.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should
function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-zszm6 STEP: creating a selector STEP: Creating the service pods in kubernetes May 2 10:53:38.592: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 2 10:54:04.787: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.149:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-zszm6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 10:54:04.787: INFO: >>> kubeConfig: /root/.kube/config I0502 10:54:04.820192 6 log.go:172] (0xc001f722c0) (0xc001b80b40) Create stream I0502 10:54:04.820221 6 log.go:172] (0xc001f722c0) (0xc001b80b40) Stream added, broadcasting: 1 I0502 10:54:04.824070 6 log.go:172] (0xc001f722c0) Reply frame received for 1 I0502 10:54:04.824146 6 log.go:172] (0xc001f722c0) (0xc0020d2000) Create stream I0502 10:54:04.824163 6 log.go:172] (0xc001f722c0) (0xc0020d2000) Stream added, broadcasting: 3 I0502 10:54:04.825007 6 log.go:172] (0xc001f722c0) Reply frame received for 3 I0502 10:54:04.825045 6 log.go:172] (0xc001f722c0) (0xc001520000) Create stream I0502 10:54:04.825056 6 log.go:172] (0xc001f722c0) (0xc001520000) Stream added, broadcasting: 5 I0502 10:54:04.825953 6 log.go:172] (0xc001f722c0) Reply frame received for 5 I0502 10:54:04.919247 6 log.go:172] (0xc001f722c0) Data frame received for 3 I0502 10:54:04.919279 6 log.go:172] (0xc0020d2000) (3) Data frame handling I0502 10:54:04.919290 6 log.go:172] (0xc0020d2000) (3) Data frame sent I0502 10:54:04.919303 6 log.go:172] (0xc001f722c0) Data frame received for 3 I0502 10:54:04.919316 6 log.go:172] (0xc0020d2000) (3) Data frame handling I0502 10:54:04.919335 6 
log.go:172] (0xc001f722c0) Data frame received for 5 I0502 10:54:04.919342 6 log.go:172] (0xc001520000) (5) Data frame handling I0502 10:54:04.920422 6 log.go:172] (0xc001f722c0) Data frame received for 1 I0502 10:54:04.920434 6 log.go:172] (0xc001b80b40) (1) Data frame handling I0502 10:54:04.920443 6 log.go:172] (0xc001b80b40) (1) Data frame sent I0502 10:54:04.920521 6 log.go:172] (0xc001f722c0) (0xc001b80b40) Stream removed, broadcasting: 1 I0502 10:54:04.920555 6 log.go:172] (0xc001f722c0) Go away received I0502 10:54:04.920638 6 log.go:172] (0xc001f722c0) (0xc001b80b40) Stream removed, broadcasting: 1 I0502 10:54:04.920649 6 log.go:172] (0xc001f722c0) (0xc0020d2000) Stream removed, broadcasting: 3 I0502 10:54:04.920655 6 log.go:172] (0xc001f722c0) (0xc001520000) Stream removed, broadcasting: 5 May 2 10:54:04.920: INFO: Found all expected endpoints: [netserver-0] May 2 10:54:04.923: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.139:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-zszm6 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 10:54:04.923: INFO: >>> kubeConfig: /root/.kube/config I0502 10:54:04.951071 6 log.go:172] (0xc001f72370) (0xc000d40320) Create stream I0502 10:54:04.951105 6 log.go:172] (0xc001f72370) (0xc000d40320) Stream added, broadcasting: 1 I0502 10:54:04.954090 6 log.go:172] (0xc001f72370) Reply frame received for 1 I0502 10:54:04.954120 6 log.go:172] (0xc001f72370) (0xc001d00140) Create stream I0502 10:54:04.954129 6 log.go:172] (0xc001f72370) (0xc001d00140) Stream added, broadcasting: 3 I0502 10:54:04.954991 6 log.go:172] (0xc001f72370) Reply frame received for 3 I0502 10:54:04.955033 6 log.go:172] (0xc001f72370) (0xc001d001e0) Create stream I0502 10:54:04.955058 6 log.go:172] (0xc001f72370) (0xc001d001e0) Stream added, broadcasting: 5 I0502 10:54:04.955850 6 log.go:172] 
(0xc001f72370) Reply frame received for 5 I0502 10:54:05.014441 6 log.go:172] (0xc001f72370) Data frame received for 5 I0502 10:54:05.014477 6 log.go:172] (0xc001d001e0) (5) Data frame handling I0502 10:54:05.014523 6 log.go:172] (0xc001f72370) Data frame received for 3 I0502 10:54:05.014551 6 log.go:172] (0xc001d00140) (3) Data frame handling I0502 10:54:05.014567 6 log.go:172] (0xc001d00140) (3) Data frame sent I0502 10:54:05.014581 6 log.go:172] (0xc001f72370) Data frame received for 3 I0502 10:54:05.014593 6 log.go:172] (0xc001d00140) (3) Data frame handling I0502 10:54:05.016031 6 log.go:172] (0xc001f72370) Data frame received for 1 I0502 10:54:05.016057 6 log.go:172] (0xc000d40320) (1) Data frame handling I0502 10:54:05.016069 6 log.go:172] (0xc000d40320) (1) Data frame sent I0502 10:54:05.016081 6 log.go:172] (0xc001f72370) (0xc000d40320) Stream removed, broadcasting: 1 I0502 10:54:05.016119 6 log.go:172] (0xc001f72370) Go away received I0502 10:54:05.016172 6 log.go:172] (0xc001f72370) (0xc000d40320) Stream removed, broadcasting: 1 I0502 10:54:05.016189 6 log.go:172] (0xc001f72370) (0xc001d00140) Stream removed, broadcasting: 3 I0502 10:54:05.016201 6 log.go:172] (0xc001f72370) (0xc001d001e0) Stream removed, broadcasting: 5 May 2 10:54:05.016: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 10:54:05.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-zszm6" for this suite. 
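Annotation: the node-pod HTTP check above curls each netserver pod's `/hostName` endpoint from a host-exec pod and passes once every expected hostname has been observed ("Found all expected endpoints: [netserver-0]", then "[netserver-1]"). A minimal sketch of that collect-until-complete loop, with a stubbed fetcher standing in for the real exec'd curl (the `fetch_hostname` callable and the pod IPs below are taken from this log for illustration, not from the framework's API):

```python
def find_expected_endpoints(expected, endpoints, fetch_hostname):
    """Query each endpoint's /hostName and report which expected pods answered."""
    seen = set()
    for ip in endpoints:
        # Stands in for: curl -g -q -s http://<ip>:8080/hostName
        name = fetch_hostname(ip)
        if name in expected:
            seen.add(name)
    missing = expected - seen
    return seen, missing

# Stub responses mimicking the two netserver pods from the records above.
responses = {"10.244.2.149": "netserver-0", "10.244.1.139": "netserver-1"}
seen, missing = find_expected_endpoints(
    {"netserver-0", "netserver-1"}, responses, responses.get)
print(sorted(seen), sorted(missing))  # both endpoints found, none missing
```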
May 2 10:54:29.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 10:54:29.096: INFO: namespace: e2e-tests-pod-network-test-zszm6, resource: bindings, ignored listing per whitelist May 2 10:54:29.111: INFO: namespace e2e-tests-pod-network-test-zszm6 deletion completed in 24.090336473s • [SLOW TEST:50.687 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 10:54:29.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4bf22c5d-8c63-11ea-8045-0242ac110017 STEP: Creating a pod to test consume secrets May 2 10:54:29.245: INFO: Waiting up to 5m0s for pod "pod-secrets-4bf4822c-8c63-11ea-8045-0242ac110017" in namespace "e2e-tests-secrets-fhnv6" to be "success or failure" May 2 10:54:29.254: INFO: Pod "pod-secrets-4bf4822c-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.762297ms May 2 10:54:31.258: INFO: Pod "pod-secrets-4bf4822c-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013050154s May 2 10:54:33.263: INFO: Pod "pod-secrets-4bf4822c-8c63-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017422985s STEP: Saw pod success May 2 10:54:33.263: INFO: Pod "pod-secrets-4bf4822c-8c63-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 10:54:33.266: INFO: Trying to get logs from node hunter-worker pod pod-secrets-4bf4822c-8c63-11ea-8045-0242ac110017 container secret-env-test: STEP: delete the pod May 2 10:54:33.307: INFO: Waiting for pod pod-secrets-4bf4822c-8c63-11ea-8045-0242ac110017 to disappear May 2 10:54:33.326: INFO: Pod pod-secrets-4bf4822c-8c63-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 10:54:33.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-fhnv6" for this suite. 
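Annotation: the "Waiting up to 5m0s for pod ... to be 'success or failure'" records above come from a poll loop that re-reads the pod's phase until it reaches a terminal state or the timeout lapses. A hedged sketch of such a loop (the `get_phase` callable and the poll budget are assumptions for illustration; the real framework queries the API server on an interval):

```python
def wait_for_terminal_phase(get_phase, max_polls=150):
    """Poll a pod's phase until it is Succeeded or Failed, or give up."""
    for _ in range(max_polls):
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
    raise TimeoutError("pod never reached a terminal phase")

# Phase sequence mirroring the records above: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases))
print(result)  # Succeeded
```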
May 2 10:54:39.347: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 10:54:39.360: INFO: namespace: e2e-tests-secrets-fhnv6, resource: bindings, ignored listing per whitelist May 2 10:54:39.417: INFO: namespace e2e-tests-secrets-fhnv6 deletion completed in 6.087195654s • [SLOW TEST:10.306 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 10:54:39.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-52181e67-8c63-11ea-8045-0242ac110017 STEP: Creating configMap with name cm-test-opt-upd-52181ec7-8c63-11ea-8045-0242ac110017 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-52181e67-8c63-11ea-8045-0242ac110017 STEP: Updating configmap cm-test-opt-upd-52181ec7-8c63-11ea-8045-0242ac110017 STEP: Creating configMap with name cm-test-opt-create-52181eea-8c63-11ea-8045-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] 
ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 10:54:47.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-v8sj6" for this suite. May 2 10:55:09.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 10:55:09.727: INFO: namespace: e2e-tests-configmap-v8sj6, resource: bindings, ignored listing per whitelist May 2 10:55:09.790: INFO: namespace e2e-tests-configmap-v8sj6 deletion completed in 22.108683099s • [SLOW TEST:30.373 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 10:55:09.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 2 10:55:09.921: INFO: Waiting up to 5m0s for pod "pod-64346c61-8c63-11ea-8045-0242ac110017" in namespace "e2e-tests-emptydir-r7x6p" to be "success or failure" May 2 10:55:09.937: INFO: Pod 
"pod-64346c61-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 15.659027ms May 2 10:55:12.129: INFO: Pod "pod-64346c61-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207631696s May 2 10:55:14.133: INFO: Pod "pod-64346c61-8c63-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.211851961s STEP: Saw pod success May 2 10:55:14.133: INFO: Pod "pod-64346c61-8c63-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 10:55:14.135: INFO: Trying to get logs from node hunter-worker pod pod-64346c61-8c63-11ea-8045-0242ac110017 container test-container: STEP: delete the pod May 2 10:55:14.239: INFO: Waiting for pod pod-64346c61-8c63-11ea-8045-0242ac110017 to disappear May 2 10:55:14.248: INFO: Pod pod-64346c61-8c63-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 10:55:14.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-r7x6p" for this suite. 
May 2 10:55:20.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 10:55:20.348: INFO: namespace: e2e-tests-emptydir-r7x6p, resource: bindings, ignored listing per whitelist May 2 10:55:20.348: INFO: namespace e2e-tests-emptydir-r7x6p deletion completed in 6.09566259s • [SLOW TEST:10.557 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 10:55:20.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 2 10:55:20.829: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6aa15586-8c63-11ea-8045-0242ac110017" in namespace "e2e-tests-downward-api-x9xk6" to be "success or failure" May 2 10:55:20.861: INFO: Pod "downwardapi-volume-6aa15586-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 31.033505ms May 2 10:55:22.975: INFO: Pod "downwardapi-volume-6aa15586-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145939107s May 2 10:55:24.979: INFO: Pod "downwardapi-volume-6aa15586-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149106609s May 2 10:55:27.032: INFO: Pod "downwardapi-volume-6aa15586-8c63-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.202114401s STEP: Saw pod success May 2 10:55:27.032: INFO: Pod "downwardapi-volume-6aa15586-8c63-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 10:55:27.034: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6aa15586-8c63-11ea-8045-0242ac110017 container client-container: STEP: delete the pod May 2 10:55:27.108: INFO: Waiting for pod downwardapi-volume-6aa15586-8c63-11ea-8045-0242ac110017 to disappear May 2 10:55:27.130: INFO: Pod downwardapi-volume-6aa15586-8c63-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 10:55:27.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-x9xk6" for this suite. 
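Annotation: the DefaultMode asserted by this Downward API case, and the `"defaultMode": 420` visible in the pod JSON earlier in this run, are the same value in different bases: the Kubernetes API serializes volume file modes as decimal integers, so the familiar octal 0644 appears as 420. A quick check of the conversion:

```python
# Kubernetes serializes volume file modes as decimal JSON integers.
default_mode = 0o644
print(default_mode, oct(default_mode))  # 420 0o644
```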
May 2 10:55:33.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 10:55:33.223: INFO: namespace: e2e-tests-downward-api-x9xk6, resource: bindings, ignored listing per whitelist May 2 10:55:33.229: INFO: namespace e2e-tests-downward-api-x9xk6 deletion completed in 6.095219279s • [SLOW TEST:12.881 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 10:55:33.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 2 10:55:33.359: INFO: Waiting up to 5m0s for pod "downward-api-722d0179-8c63-11ea-8045-0242ac110017" in namespace "e2e-tests-downward-api-ncvq2" to be "success or failure" May 2 10:55:33.370: INFO: Pod "downward-api-722d0179-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.172815ms May 2 10:55:35.374: INFO: Pod "downward-api-722d0179-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014165294s May 2 10:55:37.378: INFO: Pod "downward-api-722d0179-8c63-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01855361s STEP: Saw pod success May 2 10:55:37.378: INFO: Pod "downward-api-722d0179-8c63-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 10:55:37.381: INFO: Trying to get logs from node hunter-worker2 pod downward-api-722d0179-8c63-11ea-8045-0242ac110017 container dapi-container: STEP: delete the pod May 2 10:55:37.407: INFO: Waiting for pod downward-api-722d0179-8c63-11ea-8045-0242ac110017 to disappear May 2 10:55:37.411: INFO: Pod downward-api-722d0179-8c63-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 10:55:37.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-ncvq2" for this suite. 
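Annotation: records like `Elapsed: 4.01855361s` and the `[SLOW TEST:10.288 seconds]` summaries make per-test timings easy to pull out of a run like this with a small parser. The regex below is an assumption about the record format, checked only against sample text copied from this log; it deliberately captures second-denominated values and skips millisecond records:

```python
import re

# Matches "Elapsed: 4.01855361s" and "[SLOW TEST:10.288 seconds]" (not "...ms").
PATTERN = re.compile(r"(?:Elapsed:\s*|\[SLOW TEST:)([\d.]+)\s*(?:s\b|seconds)")

def durations(log_text):
    """Return all second-denominated durations found in a log excerpt."""
    return [float(m.group(1)) for m in PATTERN.finditer(log_text)]

sample = ('Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01855361s '
          '• [SLOW TEST:10.288 seconds]')
print(durations(sample))  # [4.01855361, 10.288]
```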
May 2 10:55:43.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 10:55:43.460: INFO: namespace: e2e-tests-downward-api-ncvq2, resource: bindings, ignored listing per whitelist May 2 10:55:43.518: INFO: namespace e2e-tests-downward-api-ncvq2 deletion completed in 6.104035255s • [SLOW TEST:10.288 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 10:55:43.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-f8dl8 STEP: creating a selector STEP: Creating the service pods in kubernetes May 2 10:55:43.623: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 2 10:56:13.752: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.145:8080/dial?request=hostName&protocol=udp&host=10.244.2.152&port=8081&tries=1'] 
Namespace:e2e-tests-pod-network-test-f8dl8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 10:56:13.752: INFO: >>> kubeConfig: /root/.kube/config I0502 10:56:13.780273 6 log.go:172] (0xc0012fa2c0) (0xc001967cc0) Create stream I0502 10:56:13.780302 6 log.go:172] (0xc0012fa2c0) (0xc001967cc0) Stream added, broadcasting: 1 I0502 10:56:13.782953 6 log.go:172] (0xc0012fa2c0) Reply frame received for 1 I0502 10:56:13.782982 6 log.go:172] (0xc0012fa2c0) (0xc0013623c0) Create stream I0502 10:56:13.782993 6 log.go:172] (0xc0012fa2c0) (0xc0013623c0) Stream added, broadcasting: 3 I0502 10:56:13.783945 6 log.go:172] (0xc0012fa2c0) Reply frame received for 3 I0502 10:56:13.783967 6 log.go:172] (0xc0012fa2c0) (0xc001967d60) Create stream I0502 10:56:13.783973 6 log.go:172] (0xc0012fa2c0) (0xc001967d60) Stream added, broadcasting: 5 I0502 10:56:13.785552 6 log.go:172] (0xc0012fa2c0) Reply frame received for 5 I0502 10:56:13.848992 6 log.go:172] (0xc0012fa2c0) Data frame received for 3 I0502 10:56:13.849021 6 log.go:172] (0xc0013623c0) (3) Data frame handling I0502 10:56:13.849039 6 log.go:172] (0xc0013623c0) (3) Data frame sent I0502 10:56:13.849435 6 log.go:172] (0xc0012fa2c0) Data frame received for 5 I0502 10:56:13.849461 6 log.go:172] (0xc001967d60) (5) Data frame handling I0502 10:56:13.849579 6 log.go:172] (0xc0012fa2c0) Data frame received for 3 I0502 10:56:13.849589 6 log.go:172] (0xc0013623c0) (3) Data frame handling I0502 10:56:13.851256 6 log.go:172] (0xc0012fa2c0) Data frame received for 1 I0502 10:56:13.851278 6 log.go:172] (0xc001967cc0) (1) Data frame handling I0502 10:56:13.851286 6 log.go:172] (0xc001967cc0) (1) Data frame sent I0502 10:56:13.851305 6 log.go:172] (0xc0012fa2c0) (0xc001967cc0) Stream removed, broadcasting: 1 I0502 10:56:13.851315 6 log.go:172] (0xc0012fa2c0) Go away received I0502 10:56:13.851431 6 log.go:172] (0xc0012fa2c0) (0xc001967cc0) Stream removed, 
broadcasting: 1 I0502 10:56:13.851454 6 log.go:172] (0xc0012fa2c0) (0xc0013623c0) Stream removed, broadcasting: 3 I0502 10:56:13.851461 6 log.go:172] (0xc0012fa2c0) (0xc001967d60) Stream removed, broadcasting: 5 May 2 10:56:13.851: INFO: Waiting for endpoints: map[] May 2 10:56:13.854: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.145:8080/dial?request=hostName&protocol=udp&host=10.244.1.144&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-f8dl8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 10:56:13.854: INFO: >>> kubeConfig: /root/.kube/config I0502 10:56:13.881444 6 log.go:172] (0xc001d482c0) (0xc001d00aa0) Create stream I0502 10:56:13.881475 6 log.go:172] (0xc001d482c0) (0xc001d00aa0) Stream added, broadcasting: 1 I0502 10:56:13.883317 6 log.go:172] (0xc001d482c0) Reply frame received for 1 I0502 10:56:13.883351 6 log.go:172] (0xc001d482c0) (0xc001d00be0) Create stream I0502 10:56:13.883361 6 log.go:172] (0xc001d482c0) (0xc001d00be0) Stream added, broadcasting: 3 I0502 10:56:13.884283 6 log.go:172] (0xc001d482c0) Reply frame received for 3 I0502 10:56:13.884322 6 log.go:172] (0xc001d482c0) (0xc001d00c80) Create stream I0502 10:56:13.884344 6 log.go:172] (0xc001d482c0) (0xc001d00c80) Stream added, broadcasting: 5 I0502 10:56:13.885535 6 log.go:172] (0xc001d482c0) Reply frame received for 5 I0502 10:56:13.954345 6 log.go:172] (0xc001d482c0) Data frame received for 3 I0502 10:56:13.954374 6 log.go:172] (0xc001d00be0) (3) Data frame handling I0502 10:56:13.954392 6 log.go:172] (0xc001d00be0) (3) Data frame sent I0502 10:56:13.954767 6 log.go:172] (0xc001d482c0) Data frame received for 5 I0502 10:56:13.954783 6 log.go:172] (0xc001d00c80) (5) Data frame handling I0502 10:56:13.955018 6 log.go:172] (0xc001d482c0) Data frame received for 3 I0502 10:56:13.955035 6 log.go:172] (0xc001d00be0) (3) Data frame handling I0502 10:56:13.956755 6 log.go:172] 
(0xc001d482c0) Data frame received for 1 I0502 10:56:13.956772 6 log.go:172] (0xc001d00aa0) (1) Data frame handling I0502 10:56:13.956784 6 log.go:172] (0xc001d00aa0) (1) Data frame sent I0502 10:56:13.956803 6 log.go:172] (0xc001d482c0) (0xc001d00aa0) Stream removed, broadcasting: 1 I0502 10:56:13.956898 6 log.go:172] (0xc001d482c0) (0xc001d00aa0) Stream removed, broadcasting: 1 I0502 10:56:13.956918 6 log.go:172] (0xc001d482c0) (0xc001d00be0) Stream removed, broadcasting: 3 I0502 10:56:13.956989 6 log.go:172] (0xc001d482c0) Go away received I0502 10:56:13.957042 6 log.go:172] (0xc001d482c0) (0xc001d00c80) Stream removed, broadcasting: 5 May 2 10:56:13.957: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 10:56:13.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-f8dl8" for this suite. May 2 10:56:37.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 10:56:38.005: INFO: namespace: e2e-tests-pod-network-test-f8dl8, resource: bindings, ignored listing per whitelist May 2 10:56:38.057: INFO: namespace e2e-tests-pod-network-test-f8dl8 deletion completed in 24.096282738s • [SLOW TEST:54.539 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:56:38.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-nxh5p/secret-test-98ca9fc6-8c63-11ea-8045-0242ac110017
STEP: Creating a pod to test consume secrets
May 2 10:56:38.165: INFO: Waiting up to 5m0s for pod "pod-configmaps-98cd0c30-8c63-11ea-8045-0242ac110017" in namespace "e2e-tests-secrets-nxh5p" to be "success or failure"
May 2 10:56:38.168: INFO: Pod "pod-configmaps-98cd0c30-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.882096ms
May 2 10:56:40.171: INFO: Pod "pod-configmaps-98cd0c30-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006209532s
May 2 10:56:42.175: INFO: Pod "pod-configmaps-98cd0c30-8c63-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009596075s
STEP: Saw pod success
May 2 10:56:42.175: INFO: Pod "pod-configmaps-98cd0c30-8c63-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 10:56:42.177: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-98cd0c30-8c63-11ea-8045-0242ac110017 container env-test:
STEP: delete the pod
May 2 10:56:42.229: INFO: Waiting for pod pod-configmaps-98cd0c30-8c63-11ea-8045-0242ac110017 to disappear
May 2 10:56:42.238: INFO: Pod pod-configmaps-98cd0c30-8c63-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:56:42.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nxh5p" for this suite.
May 2 10:56:48.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:56:48.285: INFO: namespace: e2e-tests-secrets-nxh5p, resource: bindings, ignored listing per whitelist
May 2 10:56:48.326: INFO: namespace e2e-tests-secrets-nxh5p deletion completed in 6.085161091s
• [SLOW TEST:10.269 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:56:48.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
May 2 10:56:53.329: INFO: Successfully updated pod "labelsupdate9f0a758e-8c63-11ea-8045-0242ac110017"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:56:57.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-785kw" for this suite.
May 2 10:57:19.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:57:19.552: INFO: namespace: e2e-tests-downward-api-785kw, resource: bindings, ignored listing per whitelist
May 2 10:57:19.598: INFO: namespace e2e-tests-downward-api-785kw deletion completed in 22.208595108s
• [SLOW TEST:31.271 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:57:19.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 2 10:57:19.751: INFO: Waiting up to 5m0s for pod "pod-b1955aee-8c63-11ea-8045-0242ac110017" in namespace "e2e-tests-emptydir-5kxhr" to be "success or failure"
May 2 10:57:19.761: INFO: Pod "pod-b1955aee-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 10.351558ms
May 2 10:57:21.765: INFO: Pod "pod-b1955aee-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013907842s
May 2 10:57:23.768: INFO: Pod "pod-b1955aee-8c63-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016930714s
STEP: Saw pod success
May 2 10:57:23.768: INFO: Pod "pod-b1955aee-8c63-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 10:57:23.770: INFO: Trying to get logs from node hunter-worker pod pod-b1955aee-8c63-11ea-8045-0242ac110017 container test-container:
STEP: delete the pod
May 2 10:57:23.822: INFO: Waiting for pod pod-b1955aee-8c63-11ea-8045-0242ac110017 to disappear
May 2 10:57:23.845: INFO: Pod pod-b1955aee-8c63-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:57:23.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5kxhr" for this suite.
May 2 10:57:29.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:57:29.903: INFO: namespace: e2e-tests-emptydir-5kxhr, resource: bindings, ignored listing per whitelist
May 2 10:57:29.946: INFO: namespace e2e-tests-emptydir-5kxhr deletion completed in 6.097820215s
• [SLOW TEST:10.348 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:57:29.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
May 2 10:57:30.092: INFO: Waiting up to 5m0s for pod "client-containers-b7bc6207-8c63-11ea-8045-0242ac110017" in namespace "e2e-tests-containers-mhrtw" to be "success or failure"
May 2 10:57:30.097: INFO: Pod "client-containers-b7bc6207-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.834037ms
May 2 10:57:32.101: INFO: Pod "client-containers-b7bc6207-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008726949s
May 2 10:57:34.366: INFO: Pod "client-containers-b7bc6207-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274136435s
May 2 10:57:36.371: INFO: Pod "client-containers-b7bc6207-8c63-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.278233527s
STEP: Saw pod success
May 2 10:57:36.371: INFO: Pod "client-containers-b7bc6207-8c63-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 10:57:36.374: INFO: Trying to get logs from node hunter-worker2 pod client-containers-b7bc6207-8c63-11ea-8045-0242ac110017 container test-container:
STEP: delete the pod
May 2 10:57:36.400: INFO: Waiting for pod client-containers-b7bc6207-8c63-11ea-8045-0242ac110017 to disappear
May 2 10:57:36.403: INFO: Pod client-containers-b7bc6207-8c63-11ea-8045-0242ac110017 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:57:36.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-mhrtw" for this suite.
May 2 10:57:42.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:57:42.462: INFO: namespace: e2e-tests-containers-mhrtw, resource: bindings, ignored listing per whitelist
May 2 10:57:42.493: INFO: namespace e2e-tests-containers-mhrtw deletion completed in 6.085850182s
• [SLOW TEST:12.547 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:57:42.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 2 10:57:42.655: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
May 2 10:57:42.664: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kp6fp/daemonsets","resourceVersion":"8333835"},"items":null}
May 2 10:57:42.667: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kp6fp/pods","resourceVersion":"8333835"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:57:42.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-kp6fp" for this suite.
May 2 10:57:48.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:57:48.855: INFO: namespace: e2e-tests-daemonsets-kp6fp, resource: bindings, ignored listing per whitelist
May 2 10:57:48.872: INFO: namespace e2e-tests-daemonsets-kp6fp deletion completed in 6.193007206s
S [SKIPPING] [6.379 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should rollback without unnecessary restarts [Conformance] [It]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 2 10:57:42.655: Requires at least 2 nodes (not -1)
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:57:48.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename
deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 2 10:57:50.822: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
May 2 10:57:51.055: INFO: Pod name sample-pod: Found 0 pods out of 1
May 2 10:57:56.060: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 2 10:57:56.060: INFO: Creating deployment "test-rolling-update-deployment"
May 2 10:57:56.064: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
May 2 10:57:56.076: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
May 2 10:57:58.139: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
May 2 10:57:58.143: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724013876, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724013876, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724013876, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724013876, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated",
Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 10:58:00.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724013876, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724013876, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724013876, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724013876, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 10:58:02.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724013876, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724013876, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724013876, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724013876, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is 
progressing."}}, CollisionCount:(*int32)(nil)} May 2 10:58:04.147: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 2 10:58:04.157: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-bjd8v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bjd8v/deployments/test-rolling-update-deployment,UID:c73e30ce-8c63-11ea-99e8-0242ac110002,ResourceVersion:8333921,Generation:1,CreationTimestamp:2020-05-02 10:57:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-02 10:57:56 +0000 UTC 2020-05-02 10:57:56 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-02 10:58:02 +0000 UTC 2020-05-02 10:57:56 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 2 10:58:04.160: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-bjd8v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bjd8v/replicasets/test-rolling-update-deployment-75db98fb4c,UID:c7415c99-8c63-11ea-99e8-0242ac110002,ResourceVersion:8333912,Generation:1,CreationTimestamp:2020-05-02 10:57:56 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c73e30ce-8c63-11ea-99e8-0242ac110002 0xc0019bf747 0xc0019bf748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 2 10:58:04.160: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 2 10:58:04.160: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-bjd8v,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-bjd8v/replicasets/test-rolling-update-controller,UID:c41efbe1-8c63-11ea-99e8-0242ac110002,ResourceVersion:8333920,Generation:2,CreationTimestamp:2020-05-02 10:57:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c73e30ce-8c63-11ea-99e8-0242ac110002 0xc0019bf56f 0xc0019bf5c0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 2 10:58:04.164: INFO: Pod "test-rolling-update-deployment-75db98fb4c-dds9x" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-dds9x,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-bjd8v,SelfLink:/api/v1/namespaces/e2e-tests-deployment-bjd8v/pods/test-rolling-update-deployment-75db98fb4c-dds9x,UID:c74df02c-8c63-11ea-99e8-0242ac110002,ResourceVersion:8333911,Generation:0,CreationTimestamp:2020-05-02 10:57:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c c7415c99-8c63-11ea-99e8-0242ac110002 0xc0019942f7 0xc0019942f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qpr9b {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qpr9b,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-qpr9b true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001994370} {node.kubernetes.io/unreachable Exists NoExecute 0xc001994390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 10:57:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 10:58:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 10:58:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 10:57:56 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.155,StartTime:2020-05-02 10:57:56 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-02 10:58:00 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://0c7fc35b91c960083e27d411ddac898268a14ce38186d1bf8b56c20357f2a36e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 10:58:04.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-bjd8v" 
for this suite.
May 2 10:58:12.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 10:58:12.237: INFO: namespace: e2e-tests-deployment-bjd8v, resource: bindings, ignored listing per whitelist
May 2 10:58:12.295: INFO: namespace e2e-tests-deployment-bjd8v deletion completed in 8.127651643s
• [SLOW TEST:23.423 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 10:58:12.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
May 2 10:58:12.409: INFO: namespace e2e-tests-kubectl-62shm
May 2 10:58:12.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-62shm'
May 2 10:58:15.436: INFO: stderr: ""
May 2 10:58:15.436: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
May 2 10:58:16.441: INFO: Selector matched 1 pods for map[app:redis]
May 2 10:58:16.441: INFO: Found 0 / 1
May 2 10:58:17.613: INFO: Selector matched 1 pods for map[app:redis]
May 2 10:58:17.613: INFO: Found 0 / 1
May 2 10:58:18.441: INFO: Selector matched 1 pods for map[app:redis]
May 2 10:58:18.441: INFO: Found 0 / 1
May 2 10:58:19.450: INFO: Selector matched 1 pods for map[app:redis]
May 2 10:58:19.450: INFO: Found 1 / 1
May 2 10:58:19.450: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 2 10:58:19.454: INFO: Selector matched 1 pods for map[app:redis]
May 2 10:58:19.454: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 2 10:58:19.454: INFO: wait on redis-master startup in e2e-tests-kubectl-62shm
May 2 10:58:19.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-z2vh4 redis-master --namespace=e2e-tests-kubectl-62shm'
May 2 10:58:19.576: INFO: stderr: ""
May 2 10:58:19.576: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 02 May 10:58:18.474 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 May 10:58:18.474 # Server started, Redis version 3.2.12\n1:M 02 May 10:58:18.474 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 May 10:58:18.474 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
May 2 10:58:19.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-62shm'
May 2 10:58:19.731: INFO: stderr: ""
May 2 10:58:19.731: INFO: stdout: "service/rm2 exposed\n"
May 2 10:58:19.740: INFO: Service rm2 in namespace e2e-tests-kubectl-62shm found.
STEP: exposing service
May 2 10:58:21.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-62shm'
May 2 10:58:21.906: INFO: stderr: ""
May 2 10:58:21.906: INFO: stdout: "service/rm3 exposed\n"
May 2 10:58:21.930: INFO: Service rm3 in namespace e2e-tests-kubectl-62shm found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 10:58:23.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-62shm" for this suite.
May 2 10:58:47.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 10:58:48.044: INFO: namespace: e2e-tests-kubectl-62shm, resource: bindings, ignored listing per whitelist May 2 10:58:48.079: INFO: namespace e2e-tests-kubectl-62shm deletion completed in 24.139984852s • [SLOW TEST:35.784 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 10:58:48.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 2 10:58:52.764: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e6526b6b-8c63-11ea-8045-0242ac110017" May 2 10:58:52.764: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e6526b6b-8c63-11ea-8045-0242ac110017" in namespace 
"e2e-tests-pods-dsstg" to be "terminated due to deadline exceeded" May 2 10:58:52.770: INFO: Pod "pod-update-activedeadlineseconds-e6526b6b-8c63-11ea-8045-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.149044ms May 2 10:58:54.775: INFO: Pod "pod-update-activedeadlineseconds-e6526b6b-8c63-11ea-8045-0242ac110017": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.010982374s May 2 10:58:54.775: INFO: Pod "pod-update-activedeadlineseconds-e6526b6b-8c63-11ea-8045-0242ac110017" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 10:58:54.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-dsstg" for this suite. May 2 10:59:02.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 10:59:02.815: INFO: namespace: e2e-tests-pods-dsstg, resource: bindings, ignored listing per whitelist May 2 10:59:02.871: INFO: namespace e2e-tests-pods-dsstg deletion completed in 8.092571651s • [SLOW TEST:14.792 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 10:59:02.871: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all May 2 10:59:03.049: INFO: Waiting up to 5m0s for pod "client-containers-ef275972-8c63-11ea-8045-0242ac110017" in namespace "e2e-tests-containers-g4x9s" to be "success or failure" May 2 10:59:03.054: INFO: Pod "client-containers-ef275972-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.638173ms May 2 10:59:05.066: INFO: Pod "client-containers-ef275972-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016782111s May 2 10:59:07.079: INFO: Pod "client-containers-ef275972-8c63-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029582846s May 2 10:59:09.083: INFO: Pod "client-containers-ef275972-8c63-11ea-8045-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 6.033898384s May 2 10:59:11.088: INFO: Pod "client-containers-ef275972-8c63-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.038702791s STEP: Saw pod success May 2 10:59:11.088: INFO: Pod "client-containers-ef275972-8c63-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 10:59:11.091: INFO: Trying to get logs from node hunter-worker2 pod client-containers-ef275972-8c63-11ea-8045-0242ac110017 container test-container: STEP: delete the pod May 2 10:59:11.130: INFO: Waiting for pod client-containers-ef275972-8c63-11ea-8045-0242ac110017 to disappear May 2 10:59:11.138: INFO: Pod client-containers-ef275972-8c63-11ea-8045-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 10:59:11.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-g4x9s" for this suite. May 2 10:59:17.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 10:59:17.168: INFO: namespace: e2e-tests-containers-g4x9s, resource: bindings, ignored listing per whitelist May 2 10:59:17.226: INFO: namespace e2e-tests-containers-g4x9s deletion completed in 6.0853479s • [SLOW TEST:14.354 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 10:59:17.226: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC May 2 10:59:17.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-j8q99' May 2 10:59:17.871: INFO: stderr: "" May 2 10:59:17.871: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 2 10:59:19.268: INFO: Selector matched 1 pods for map[app:redis] May 2 10:59:19.268: INFO: Found 0 / 1 May 2 10:59:19.875: INFO: Selector matched 1 pods for map[app:redis] May 2 10:59:19.875: INFO: Found 0 / 1 May 2 10:59:20.875: INFO: Selector matched 1 pods for map[app:redis] May 2 10:59:20.875: INFO: Found 0 / 1 May 2 10:59:21.875: INFO: Selector matched 1 pods for map[app:redis] May 2 10:59:21.875: INFO: Found 0 / 1 May 2 10:59:22.876: INFO: Selector matched 1 pods for map[app:redis] May 2 10:59:22.876: INFO: Found 1 / 1 May 2 10:59:22.876: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 2 10:59:22.880: INFO: Selector matched 1 pods for map[app:redis] May 2 10:59:22.880: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 2 10:59:22.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-5v74m --namespace=e2e-tests-kubectl-j8q99 -p {"metadata":{"annotations":{"x":"y"}}}' May 2 10:59:23.041: INFO: stderr: "" May 2 10:59:23.041: INFO: stdout: "pod/redis-master-5v74m patched\n" STEP: checking annotations May 2 10:59:23.046: INFO: Selector matched 1 pods for map[app:redis] May 2 10:59:23.046: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 10:59:23.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-j8q99" for this suite. May 2 10:59:45.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 10:59:45.103: INFO: namespace: e2e-tests-kubectl-j8q99, resource: bindings, ignored listing per whitelist May 2 10:59:45.151: INFO: namespace e2e-tests-kubectl-j8q99 deletion completed in 22.10159987s • [SLOW TEST:27.925 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 10:59:45.151: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 2 10:59:45.267: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08534647-8c64-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-d9fs4" to be "success or failure" May 2 10:59:45.302: INFO: Pod "downwardapi-volume-08534647-8c64-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 34.317949ms May 2 10:59:47.306: INFO: Pod "downwardapi-volume-08534647-8c64-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038236532s May 2 10:59:49.310: INFO: Pod "downwardapi-volume-08534647-8c64-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.042995788s STEP: Saw pod success May 2 10:59:49.311: INFO: Pod "downwardapi-volume-08534647-8c64-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 10:59:49.363: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-08534647-8c64-11ea-8045-0242ac110017 container client-container: STEP: delete the pod May 2 10:59:49.392: INFO: Waiting for pod downwardapi-volume-08534647-8c64-11ea-8045-0242ac110017 to disappear May 2 10:59:49.421: INFO: Pod downwardapi-volume-08534647-8c64-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 10:59:49.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-d9fs4" for this suite. May 2 10:59:55.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 10:59:56.219: INFO: namespace: e2e-tests-projected-d9fs4, resource: bindings, ignored listing per whitelist May 2 10:59:56.263: INFO: namespace e2e-tests-projected-d9fs4 deletion completed in 6.839179917s • [SLOW TEST:11.112 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 
10:59:56.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 2 10:59:56.504: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:00:04.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-rv7fg" for this suite. May 2 11:00:12.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:00:12.523: INFO: namespace: e2e-tests-init-container-rv7fg, resource: bindings, ignored listing per whitelist May 2 11:00:12.536: INFO: namespace e2e-tests-init-container-rv7fg deletion completed in 8.07919921s • [SLOW TEST:16.273 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 
STEP: Creating a kubernetes client May 2 11:00:12.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller May 2 11:00:12.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:13.117: INFO: stderr: "" May 2 11:00:13.117: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 2 11:00:13.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:13.380: INFO: stderr: "" May 2 11:00:13.380: INFO: stdout: "update-demo-nautilus-6p5zc update-demo-nautilus-pchp7 " May 2 11:00:13.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6p5zc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:13.747: INFO: stderr: "" May 2 11:00:13.747: INFO: stdout: "" May 2 11:00:13.747: INFO: update-demo-nautilus-6p5zc is created but not running May 2 11:00:18.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:19.045: INFO: stderr: "" May 2 11:00:19.045: INFO: stdout: "update-demo-nautilus-6p5zc update-demo-nautilus-pchp7 " May 2 11:00:19.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6p5zc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:19.177: INFO: stderr: "" May 2 11:00:19.177: INFO: stdout: "true" May 2 11:00:19.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6p5zc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:19.269: INFO: stderr: "" May 2 11:00:19.269: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 11:00:19.269: INFO: validating pod update-demo-nautilus-6p5zc May 2 11:00:19.273: INFO: got data: { "image": "nautilus.jpg" } May 2 11:00:19.273: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 2 11:00:19.273: INFO: update-demo-nautilus-6p5zc is verified up and running May 2 11:00:19.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pchp7 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:19.375: INFO: stderr: "" May 2 11:00:19.375: INFO: stdout: "true" May 2 11:00:19.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pchp7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:19.464: INFO: stderr: "" May 2 11:00:19.464: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 11:00:19.464: INFO: validating pod update-demo-nautilus-pchp7 May 2 11:00:19.467: INFO: got data: { "image": "nautilus.jpg" } May 2 11:00:19.467: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 2 11:00:19.467: INFO: update-demo-nautilus-pchp7 is verified up and running STEP: scaling down the replication controller May 2 11:00:19.469: INFO: scanned /root for discovery docs: May 2 11:00:19.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:20.639: INFO: stderr: "" May 2 11:00:20.639: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 2 11:00:20.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:20.741: INFO: stderr: "" May 2 11:00:20.741: INFO: stdout: "update-demo-nautilus-6p5zc update-demo-nautilus-pchp7 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 2 11:00:25.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:25.854: INFO: stderr: "" May 2 11:00:25.854: INFO: stdout: "update-demo-nautilus-6p5zc update-demo-nautilus-pchp7 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 2 11:00:30.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:30.965: INFO: stderr: "" May 2 11:00:30.965: INFO: stdout: "update-demo-nautilus-6p5zc update-demo-nautilus-pchp7 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 2 11:00:35.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:36.069: INFO: stderr: "" May 2 11:00:36.069: INFO: stdout: "update-demo-nautilus-pchp7 " May 2 11:00:36.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pchp7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:36.305: INFO: stderr: "" May 2 11:00:36.305: INFO: stdout: "true" May 2 11:00:36.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pchp7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:36.414: INFO: stderr: "" May 2 11:00:36.414: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 11:00:36.414: INFO: validating pod update-demo-nautilus-pchp7 May 2 11:00:36.417: INFO: got data: { "image": "nautilus.jpg" } May 2 11:00:36.417: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 2 11:00:36.417: INFO: update-demo-nautilus-pchp7 is verified up and running STEP: scaling up the replication controller May 2 11:00:36.419: INFO: scanned /root for discovery docs: May 2 11:00:36.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:37.574: INFO: stderr: "" May 2 11:00:37.574: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 2 11:00:37.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:37.670: INFO: stderr: "" May 2 11:00:37.670: INFO: stdout: "update-demo-nautilus-mgr2f update-demo-nautilus-pchp7 " May 2 11:00:37.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mgr2f -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:37.778: INFO: stderr: "" May 2 11:00:37.778: INFO: stdout: "" May 2 11:00:37.778: INFO: update-demo-nautilus-mgr2f is created but not running May 2 11:00:42.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:42.890: INFO: stderr: "" May 2 11:00:42.890: INFO: stdout: "update-demo-nautilus-mgr2f update-demo-nautilus-pchp7 " May 2 11:00:42.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mgr2f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:42.992: INFO: stderr: "" May 2 11:00:42.992: INFO: stdout: "true" May 2 11:00:42.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mgr2f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:43.080: INFO: stderr: "" May 2 11:00:43.080: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 11:00:43.080: INFO: validating pod update-demo-nautilus-mgr2f May 2 11:00:43.210: INFO: got data: { "image": "nautilus.jpg" } May 2 11:00:43.210: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 2 11:00:43.210: INFO: update-demo-nautilus-mgr2f is verified up and running May 2 11:00:43.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pchp7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:43.309: INFO: stderr: "" May 2 11:00:43.309: INFO: stdout: "true" May 2 11:00:43.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pchp7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:43.419: INFO: stderr: "" May 2 11:00:43.419: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 11:00:43.419: INFO: validating pod update-demo-nautilus-pchp7 May 2 11:00:43.422: INFO: got data: { "image": "nautilus.jpg" } May 2 11:00:43.422: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 2 11:00:43.422: INFO: update-demo-nautilus-pchp7 is verified up and running STEP: using delete to clean up resources May 2 11:00:43.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:43.580: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 2 11:00:43.580: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 2 11:00:43.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-t7wtz' May 2 11:00:43.682: INFO: stderr: "No resources found.\n" May 2 11:00:43.682: INFO: stdout: "" May 2 11:00:43.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-t7wtz -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 2 11:00:43.851: INFO: stderr: "" May 2 11:00:43.851: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:00:43.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-t7wtz" for this suite. 
May 2 11:01:06.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:01:06.054: INFO: namespace: e2e-tests-kubectl-t7wtz, resource: bindings, ignored listing per whitelist May 2 11:01:06.100: INFO: namespace e2e-tests-kubectl-t7wtz deletion completed in 22.24618776s • [SLOW TEST:53.563 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:01:06.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-389e0209-8c64-11ea-8045-0242ac110017 STEP: Creating secret with name s-test-opt-upd-389e0292-8c64-11ea-8045-0242ac110017 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-389e0209-8c64-11ea-8045-0242ac110017 STEP: Updating secret s-test-opt-upd-389e0292-8c64-11ea-8045-0242ac110017 STEP: Creating secret with name 
s-test-opt-create-389e02c5-8c64-11ea-8045-0242ac110017 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:02:39.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5j94w" for this suite. May 2 11:03:03.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:03:03.724: INFO: namespace: e2e-tests-projected-5j94w, resource: bindings, ignored listing per whitelist May 2 11:03:03.788: INFO: namespace e2e-tests-projected-5j94w deletion completed in 24.207677496s • [SLOW TEST:117.688 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:03:03.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 2 11:03:03.948: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 2 11:03:03.959: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:03.961: INFO: Number of nodes with available pods: 0 May 2 11:03:03.961: INFO: Node hunter-worker is running more than one daemon pod May 2 11:03:04.966: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:04.969: INFO: Number of nodes with available pods: 0 May 2 11:03:04.969: INFO: Node hunter-worker is running more than one daemon pod May 2 11:03:06.096: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:06.099: INFO: Number of nodes with available pods: 0 May 2 11:03:06.099: INFO: Node hunter-worker is running more than one daemon pod May 2 11:03:07.031: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:07.094: INFO: Number of nodes with available pods: 0 May 2 11:03:07.094: INFO: Node hunter-worker is running more than one daemon pod May 2 11:03:07.966: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:07.968: INFO: Number of nodes with available pods: 2 May 2 11:03:07.968: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. 
STEP: Check that daemon pods images are updated. May 2 11:03:07.998: INFO: Wrong image for pod: daemon-set-glprd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:07.998: INFO: Wrong image for pod: daemon-set-q799q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:08.016: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:09.020: INFO: Wrong image for pod: daemon-set-glprd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:09.021: INFO: Wrong image for pod: daemon-set-q799q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:09.025: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:10.021: INFO: Wrong image for pod: daemon-set-glprd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:10.021: INFO: Wrong image for pod: daemon-set-q799q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:10.026: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:11.021: INFO: Wrong image for pod: daemon-set-glprd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:11.021: INFO: Wrong image for pod: daemon-set-q799q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 2 11:03:11.021: INFO: Pod daemon-set-q799q is not available May 2 11:03:11.043: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:12.020: INFO: Wrong image for pod: daemon-set-glprd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:12.020: INFO: Pod daemon-set-rmwfk is not available May 2 11:03:12.024: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:13.020: INFO: Wrong image for pod: daemon-set-glprd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:13.020: INFO: Pod daemon-set-rmwfk is not available May 2 11:03:13.025: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:14.020: INFO: Wrong image for pod: daemon-set-glprd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:14.020: INFO: Pod daemon-set-rmwfk is not available May 2 11:03:14.025: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:15.020: INFO: Wrong image for pod: daemon-set-glprd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 2 11:03:15.020: INFO: Pod daemon-set-rmwfk is not available May 2 11:03:15.024: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:16.020: INFO: Wrong image for pod: daemon-set-glprd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:16.024: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:17.021: INFO: Wrong image for pod: daemon-set-glprd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:17.025: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:18.020: INFO: Wrong image for pod: daemon-set-glprd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:18.020: INFO: Pod daemon-set-glprd is not available May 2 11:03:18.023: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:19.021: INFO: Wrong image for pod: daemon-set-glprd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:19.021: INFO: Pod daemon-set-glprd is not available May 2 11:03:19.025: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:20.020: INFO: Wrong image for pod: daemon-set-glprd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
May 2 11:03:20.020: INFO: Pod daemon-set-glprd is not available May 2 11:03:20.024: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:21.020: INFO: Wrong image for pod: daemon-set-glprd. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. May 2 11:03:21.020: INFO: Pod daemon-set-glprd is not available May 2 11:03:21.024: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:22.037: INFO: Pod daemon-set-qvhjt is not available May 2 11:03:22.084: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
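The one-pod-at-a-time replacement sequence above (old `nginx:1.14-alpine` pods marked "not available" and replaced until the expected `redis:1.0` image is seen) is the behavior of a DaemonSet with the RollingUpdate strategy. A hedged sketch under assumed names (label keys and the container name are illustrative; the two images are taken from the log):

```yaml
# Sketch of a DaemonSet using RollingUpdate, as exercised by this test.
# maxUnavailable: 1 (the default) matches the one-at-a-time replacement
# visible in the log.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set   # assumed label key
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app   # assumed container name
        image: docker.io/library/nginx:1.14-alpine  # test updates this to redis:1.0
```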
May 2 11:03:22.088: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:22.091: INFO: Number of nodes with available pods: 1 May 2 11:03:22.091: INFO: Node hunter-worker2 is running more than one daemon pod May 2 11:03:23.128: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:23.131: INFO: Number of nodes with available pods: 1 May 2 11:03:23.131: INFO: Node hunter-worker2 is running more than one daemon pod May 2 11:03:24.386: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:24.389: INFO: Number of nodes with available pods: 1 May 2 11:03:24.389: INFO: Node hunter-worker2 is running more than one daemon pod May 2 11:03:25.096: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:25.140: INFO: Number of nodes with available pods: 1 May 2 11:03:25.140: INFO: Node hunter-worker2 is running more than one daemon pod May 2 11:03:26.096: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 2 11:03:26.100: INFO: Number of nodes with available pods: 2 May 2 11:03:26.100: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-dwng6, will wait for the garbage 
collector to delete the pods May 2 11:03:26.283: INFO: Deleting DaemonSet.extensions daemon-set took: 7.02792ms May 2 11:03:26.583: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.272028ms May 2 11:03:41.807: INFO: Number of nodes with available pods: 0 May 2 11:03:41.807: INFO: Number of running nodes: 0, number of available pods: 0 May 2 11:03:41.887: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-dwng6/daemonsets","resourceVersion":"8334993"},"items":null} May 2 11:03:41.916: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-dwng6/pods","resourceVersion":"8334993"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:03:41.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-dwng6" for this suite. 
May 2 11:03:49.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:03:49.979: INFO: namespace: e2e-tests-daemonsets-dwng6, resource: bindings, ignored listing per whitelist May 2 11:03:50.027: INFO: namespace e2e-tests-daemonsets-dwng6 deletion completed in 8.097031944s • [SLOW TEST:46.239 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:03:50.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
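The lifecycle-hook test above first creates a handler pod for hook requests and then a pod named `pod-with-prestop-exec-hook`. A hedged sketch of what such a pod spec looks like (the pod name comes from the log; the image and the exec command are illustrative assumptions, since the suite's real hook posts back to its handler pod):

```yaml
# Sketch of a pod with a preStop exec lifecycle hook.
# Only the pod name is attested by the log; image and command are assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main                                  # assumed container name
    image: docker.io/library/busybox:1.29       # assumed image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # Illustrative: notify an external handler before termination.
          command: ["sh", "-c", "wget -q -O- http://handler:8080/prestop"]
```

When the pod is deleted, the kubelet runs the preStop command and only then sends SIGTERM, which is why the log shows the pod lingering ("still exists") through the grace period before disappearing.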
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 2 11:03:58.252: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 11:03:58.282: INFO: Pod pod-with-prestop-exec-hook still exists May 2 11:04:00.283: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 11:04:00.287: INFO: Pod pod-with-prestop-exec-hook still exists May 2 11:04:02.282: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 11:04:02.335: INFO: Pod pod-with-prestop-exec-hook still exists May 2 11:04:04.282: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 11:04:04.286: INFO: Pod pod-with-prestop-exec-hook still exists May 2 11:04:06.283: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 11:04:06.287: INFO: Pod pod-with-prestop-exec-hook still exists May 2 11:04:08.282: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 11:04:08.286: INFO: Pod pod-with-prestop-exec-hook still exists May 2 11:04:10.283: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 11:04:10.287: INFO: Pod pod-with-prestop-exec-hook still exists May 2 11:04:12.283: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 11:04:12.348: INFO: Pod pod-with-prestop-exec-hook still exists May 2 11:04:14.283: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 11:04:14.287: INFO: Pod pod-with-prestop-exec-hook still exists May 2 11:04:16.283: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 11:04:16.287: INFO: Pod pod-with-prestop-exec-hook still exists May 2 11:04:18.283: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 11:04:18.287: INFO: Pod pod-with-prestop-exec-hook still exists May 2 11:04:20.283: INFO: Waiting for pod pod-with-prestop-exec-hook 
to disappear May 2 11:04:20.287: INFO: Pod pod-with-prestop-exec-hook still exists May 2 11:04:22.283: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 2 11:04:22.287: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:04:22.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-bsrg5" for this suite. May 2 11:04:48.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:04:48.349: INFO: namespace: e2e-tests-container-lifecycle-hook-bsrg5, resource: bindings, ignored listing per whitelist May 2 11:04:48.383: INFO: namespace e2e-tests-container-lifecycle-hook-bsrg5 deletion completed in 26.085556099s • [SLOW TEST:58.356 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:04:48.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting 
for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 2 11:04:48.497: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd10fbae-8c64-11ea-8045-0242ac110017" in namespace "e2e-tests-downward-api-wx5gv" to be "success or failure" May 2 11:04:48.500: INFO: Pod "downwardapi-volume-bd10fbae-8c64-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.423913ms May 2 11:04:50.504: INFO: Pod "downwardapi-volume-bd10fbae-8c64-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006836318s May 2 11:04:52.509: INFO: Pod "downwardapi-volume-bd10fbae-8c64-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01151347s STEP: Saw pod success May 2 11:04:52.509: INFO: Pod "downwardapi-volume-bd10fbae-8c64-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:04:52.512: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-bd10fbae-8c64-11ea-8045-0242ac110017 container client-container: STEP: delete the pod May 2 11:04:52.553: INFO: Waiting for pod downwardapi-volume-bd10fbae-8c64-11ea-8045-0242ac110017 to disappear May 2 11:04:52.610: INFO: Pod downwardapi-volume-bd10fbae-8c64-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:04:52.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wx5gv" for this suite. 
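The downward API test above mounts a volume whose file content is checked against the container's CPU request. A hedged sketch of that wiring via `resourceFieldRef` (file path, request value, and image are illustrative assumptions; the container name `client-container` appears in the log):

```yaml
# Sketch of a downward API volume exposing the container's CPU request.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0  # assumed image
    resources:
      requests:
        cpu: 250m   # illustrative request; the test reads this back from the file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m   # expose the value in millicores
```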
May 2 11:04:58.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:04:58.646: INFO: namespace: e2e-tests-downward-api-wx5gv, resource: bindings, ignored listing per whitelist May 2 11:04:58.702: INFO: namespace e2e-tests-downward-api-wx5gv deletion completed in 6.088193153s • [SLOW TEST:10.318 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:04:58.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-8jtdx May 2 11:05:02.830: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-8jtdx STEP: checking the pod's current state and verifying that restartCount is present May 2 11:05:02.833: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod 
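The probe test above creates `liveness-exec` and verifies its restart count stays at 0, i.e. the exec probe `cat /tmp/health` keeps succeeding. A hedged sketch of such a pod (pod name and probe command come from the log; the image, timings, and startup command are assumptions):

```yaml
# Sketch of the liveness-exec pod: an exec liveness probe that succeeds
# as long as /tmp/health exists, so the container is never restarted.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29   # assumed image
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5   # illustrative timings
      periodSeconds: 5
```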
[AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:09:03.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-8jtdx" for this suite. May 2 11:09:09.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:09:09.389: INFO: namespace: e2e-tests-container-probe-8jtdx, resource: bindings, ignored listing per whitelist May 2 11:09:09.450: INFO: namespace e2e-tests-container-probe-8jtdx deletion completed in 6.103821471s • [SLOW TEST:250.748 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:09:09.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller May 2 11:09:09.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gpqgv' May 2 11:09:12.219: INFO: stderr: "" May 2 11:09:12.219: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 2 11:09:12.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gpqgv' May 2 11:09:12.390: INFO: stderr: "" May 2 11:09:12.390: INFO: stdout: "update-demo-nautilus-42nfh update-demo-nautilus-77dkc " May 2 11:09:12.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42nfh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gpqgv' May 2 11:09:12.482: INFO: stderr: "" May 2 11:09:12.482: INFO: stdout: "" May 2 11:09:12.482: INFO: update-demo-nautilus-42nfh is created but not running May 2 11:09:17.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gpqgv' May 2 11:09:17.577: INFO: stderr: "" May 2 11:09:17.577: INFO: stdout: "update-demo-nautilus-42nfh update-demo-nautilus-77dkc " May 2 11:09:17.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42nfh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gpqgv' May 2 11:09:17.688: INFO: stderr: "" May 2 11:09:17.689: INFO: stdout: "true" May 2 11:09:17.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-42nfh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gpqgv' May 2 11:09:17.790: INFO: stderr: "" May 2 11:09:17.790: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 11:09:17.790: INFO: validating pod update-demo-nautilus-42nfh May 2 11:09:17.795: INFO: got data: { "image": "nautilus.jpg" } May 2 11:09:17.795: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 2 11:09:17.795: INFO: update-demo-nautilus-42nfh is verified up and running May 2 11:09:17.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77dkc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gpqgv' May 2 11:09:17.908: INFO: stderr: "" May 2 11:09:17.908: INFO: stdout: "true" May 2 11:09:17.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-77dkc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gpqgv' May 2 11:09:18.016: INFO: stderr: "" May 2 11:09:18.016: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 2 11:09:18.016: INFO: validating pod update-demo-nautilus-77dkc May 2 11:09:18.020: INFO: got data: { "image": "nautilus.jpg" } May 2 11:09:18.020: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 2 11:09:18.020: INFO: update-demo-nautilus-77dkc is verified up and running
STEP: rolling-update to new replication controller
May 2 11:09:18.023: INFO: scanned /root for discovery docs:
May 2 11:09:18.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-gpqgv'
May 2 11:09:40.728: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 2 11:09:40.728: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 2 11:09:40.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-gpqgv'
May 2 11:09:40.841: INFO: stderr: ""
May 2 11:09:40.841: INFO: stdout: "update-demo-kitten-lpklm update-demo-kitten-rfvml "
May 2 11:09:40.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lpklm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gpqgv'
May 2 11:09:40.964: INFO: stderr: ""
May 2 11:09:40.964: INFO: stdout: "true"
May 2 11:09:40.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-lpklm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gpqgv'
May 2 11:09:41.084: INFO: stderr: ""
May 2 11:09:41.084: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 2 11:09:41.084: INFO: validating pod update-demo-kitten-lpklm
May 2 11:09:41.088: INFO: got data: {
  "image": "kitten.jpg"
}
May 2 11:09:41.088: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 2 11:09:41.088: INFO: update-demo-kitten-lpklm is verified up and running
May 2 11:09:41.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rfvml -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gpqgv'
May 2 11:09:41.187: INFO: stderr: ""
May 2 11:09:41.187: INFO: stdout: "true"
May 2 11:09:41.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rfvml -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-gpqgv'
May 2 11:09:41.277: INFO: stderr: ""
May 2 11:09:41.277: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
May 2 11:09:41.277: INFO: validating pod update-demo-kitten-rfvml
May 2 11:09:41.281: INFO: got data: {
  "image": "kitten.jpg"
}
May 2 11:09:41.281: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
May 2 11:09:41.281: INFO: update-demo-kitten-rfvml is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:09:41.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gpqgv" for this suite.
May 2 11:10:03.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:10:03.311: INFO: namespace: e2e-tests-kubectl-gpqgv, resource: bindings, ignored listing per whitelist
May 2 11:10:03.376: INFO: namespace e2e-tests-kubectl-gpqgv deletion completed in 22.09188677s
• [SLOW TEST:53.925 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:10:03.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-khhdk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-khhdk to expose endpoints map[]
May 2 11:10:03.538: INFO: Get endpoints failed (11.832602ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May 2 11:10:04.542: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-khhdk exposes endpoints map[] (1.01570594s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-khhdk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-khhdk to expose endpoints map[pod1:[100]]
May 2 11:10:07.606: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-khhdk exposes endpoints map[pod1:[100]] (3.058343534s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-khhdk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-khhdk to expose endpoints map[pod1:[100] pod2:[101]]
May 2 11:10:10.736: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-khhdk exposes endpoints map[pod2:[101] pod1:[100]] (3.126155277s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-khhdk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-khhdk to expose endpoints map[pod2:[101]]
May 2 11:10:11.803: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-khhdk exposes endpoints map[pod2:[101]] (1.062483296s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-khhdk
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-khhdk to expose endpoints map[]
May 2 11:10:12.843: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-khhdk exposes endpoints map[] (1.035208189s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:10:12.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-khhdk" for this suite.
May 2 11:10:34.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:10:35.007: INFO: namespace: e2e-tests-services-khhdk, resource: bindings, ignored listing per whitelist
May 2 11:10:35.019: INFO: namespace e2e-tests-services-khhdk deletion completed in 22.108180028s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:31.642 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:10:35.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0502 11:11:15.583779 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 2 11:11:15.583: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:11:15.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-djlqn" for this suite.
May 2 11:11:27.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:11:27.688: INFO: namespace: e2e-tests-gc-djlqn, resource: bindings, ignored listing per whitelist
May 2 11:11:27.750: INFO: namespace e2e-tests-gc-djlqn deletion completed in 12.163290933s
• [SLOW TEST:52.730 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:11:27.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-ab2c708f-8c65-11ea-8045-0242ac110017
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-ab2c708f-8c65-11ea-8045-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:13:03.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wf6d7" for this suite.
May 2 11:13:33.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:13:33.148: INFO: namespace: e2e-tests-projected-wf6d7, resource: bindings, ignored listing per whitelist
May 2 11:13:33.165: INFO: namespace e2e-tests-projected-wf6d7 deletion completed in 30.133770906s
• [SLOW TEST:125.415 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:13:33.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-f5da57a1-8c65-11ea-8045-0242ac110017
STEP: Creating a pod to test consume secrets
May 2 11:13:33.277: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f5dcb33e-8c65-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-hf4wd" to be "success or failure"
May 2 11:13:33.280: INFO: Pod "pod-projected-secrets-f5dcb33e-8c65-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.839263ms
May 2 11:13:35.287: INFO: Pod "pod-projected-secrets-f5dcb33e-8c65-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009650727s
May 2 11:13:37.291: INFO: Pod "pod-projected-secrets-f5dcb33e-8c65-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013821923s
STEP: Saw pod success
May 2 11:13:37.291: INFO: Pod "pod-projected-secrets-f5dcb33e-8c65-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:13:37.293: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-f5dcb33e-8c65-11ea-8045-0242ac110017 container secret-volume-test:
STEP: delete the pod
May 2 11:13:37.428: INFO: Waiting for pod pod-projected-secrets-f5dcb33e-8c65-11ea-8045-0242ac110017 to disappear
May 2 11:13:37.613: INFO: Pod pod-projected-secrets-f5dcb33e-8c65-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:13:37.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hf4wd" for this suite.
May 2 11:13:43.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:13:43.741: INFO: namespace: e2e-tests-projected-hf4wd, resource: bindings, ignored listing per whitelist
May 2 11:13:43.751: INFO: namespace e2e-tests-projected-hf4wd deletion completed in 6.134739108s
• [SLOW TEST:10.585 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:13:43.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-fc2e5b4f-8c65-11ea-8045-0242ac110017
STEP: Creating a pod to test consume secrets
May 2 11:13:43.935: INFO: Waiting up to 5m0s for pod "pod-secrets-fc2fe2de-8c65-11ea-8045-0242ac110017" in namespace "e2e-tests-secrets-484cz" to be "success or failure"
May 2 11:13:43.946: INFO: Pod "pod-secrets-fc2fe2de-8c65-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 11.51636ms
May 2 11:13:45.950: INFO: Pod "pod-secrets-fc2fe2de-8c65-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015520383s
May 2 11:13:47.954: INFO: Pod "pod-secrets-fc2fe2de-8c65-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019356834s
May 2 11:13:49.958: INFO: Pod "pod-secrets-fc2fe2de-8c65-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023453996s
STEP: Saw pod success
May 2 11:13:49.958: INFO: Pod "pod-secrets-fc2fe2de-8c65-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:13:49.961: INFO: Trying to get logs from node hunter-worker pod pod-secrets-fc2fe2de-8c65-11ea-8045-0242ac110017 container secret-volume-test:
STEP: delete the pod
May 2 11:13:49.990: INFO: Waiting for pod pod-secrets-fc2fe2de-8c65-11ea-8045-0242ac110017 to disappear
May 2 11:13:50.051: INFO: Pod pod-secrets-fc2fe2de-8c65-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:13:50.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-484cz" for this suite.
May 2 11:13:56.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:13:56.211: INFO: namespace: e2e-tests-secrets-484cz, resource: bindings, ignored listing per whitelist
May 2 11:13:56.260: INFO: namespace e2e-tests-secrets-484cz deletion completed in 6.205584508s
• [SLOW TEST:12.509 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:13:56.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 2 11:13:56.456: INFO: Waiting up to 5m0s for pod "pod-03ad42b8-8c66-11ea-8045-0242ac110017" in namespace "e2e-tests-emptydir-7z44m" to be "success or failure"
May 2 11:13:56.473: INFO: Pod "pod-03ad42b8-8c66-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.764237ms
May 2 11:13:58.578: INFO: Pod "pod-03ad42b8-8c66-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122528782s
May 2 11:14:00.583: INFO: Pod "pod-03ad42b8-8c66-11ea-8045-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.127038727s
May 2 11:14:02.587: INFO: Pod "pod-03ad42b8-8c66-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.131313075s
STEP: Saw pod success
May 2 11:14:02.587: INFO: Pod "pod-03ad42b8-8c66-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:14:02.591: INFO: Trying to get logs from node hunter-worker2 pod pod-03ad42b8-8c66-11ea-8045-0242ac110017 container test-container:
STEP: delete the pod
May 2 11:14:02.651: INFO: Waiting for pod pod-03ad42b8-8c66-11ea-8045-0242ac110017 to disappear
May 2 11:14:02.658: INFO: Pod pod-03ad42b8-8c66-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:14:02.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7z44m" for this suite.
May 2 11:14:08.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:14:08.733: INFO: namespace: e2e-tests-emptydir-7z44m, resource: bindings, ignored listing per whitelist
May 2 11:14:08.750: INFO: namespace e2e-tests-emptydir-7z44m deletion completed in 6.088585228s
• [SLOW TEST:12.490 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:14:08.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-0b121918-8c66-11ea-8045-0242ac110017
May 2 11:14:08.900: INFO: Pod name my-hostname-basic-0b121918-8c66-11ea-8045-0242ac110017: Found 0 pods out of 1
May 2 11:14:13.905: INFO: Pod name my-hostname-basic-0b121918-8c66-11ea-8045-0242ac110017: Found 1 pods out of 1
May 2 11:14:13.905: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-0b121918-8c66-11ea-8045-0242ac110017" are running
May 2 11:14:13.908: INFO: Pod "my-hostname-basic-0b121918-8c66-11ea-8045-0242ac110017-ng8q5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 11:14:08 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 11:14:12 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 11:14:12 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 11:14:08 +0000 UTC Reason: Message:}])
May 2 11:14:13.908: INFO: Trying to dial the pod
May 2 11:14:18.927: INFO: Controller my-hostname-basic-0b121918-8c66-11ea-8045-0242ac110017: Got expected result from replica 1 [my-hostname-basic-0b121918-8c66-11ea-8045-0242ac110017-ng8q5]: "my-hostname-basic-0b121918-8c66-11ea-8045-0242ac110017-ng8q5", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:14:18.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-rjlfz" for this suite.
May 2 11:14:24.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:14:25.008: INFO: namespace: e2e-tests-replication-controller-rjlfz, resource: bindings, ignored listing per whitelist
May 2 11:14:25.060: INFO: namespace e2e-tests-replication-controller-rjlfz deletion completed in 6.129769549s
• [SLOW TEST:16.310 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:14:25.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 2 11:14:25.167: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14c7aac6-8c66-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-gwfsh" to be "success or failure"
May 2 11:14:25.225: INFO: Pod "downwardapi-volume-14c7aac6-8c66-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 57.273725ms
May 2 11:14:27.314: INFO: Pod "downwardapi-volume-14c7aac6-8c66-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146889314s
May 2 11:14:29.319: INFO: Pod "downwardapi-volume-14c7aac6-8c66-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151199804s
STEP: Saw pod success
May 2 11:14:29.319: INFO: Pod "downwardapi-volume-14c7aac6-8c66-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:14:29.322: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-14c7aac6-8c66-11ea-8045-0242ac110017 container client-container:
STEP: delete the pod
May 2 11:14:29.454: INFO: Waiting for pod downwardapi-volume-14c7aac6-8c66-11ea-8045-0242ac110017 to disappear
May 2 11:14:29.467: INFO: Pod downwardapi-volume-14c7aac6-8c66-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:14:29.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gwfsh" for this suite.
May 2 11:14:35.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:14:35.514: INFO: namespace: e2e-tests-projected-gwfsh, resource: bindings, ignored listing per whitelist
May 2 11:14:35.583: INFO: namespace e2e-tests-projected-gwfsh deletion completed in 6.111959302s
• [SLOW TEST:10.523 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:14:35.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
May 2 11:14:35.716: INFO: Waiting up to 5m0s for pod "pod-1b12d70c-8c66-11ea-8045-0242ac110017" in namespace "e2e-tests-emptydir-gfm7l" to be "success or failure"
May 2 11:14:35.744: INFO: Pod "pod-1b12d70c-8c66-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 28.258437ms
May 2 11:14:37.818: INFO: Pod "pod-1b12d70c-8c66-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101959535s
May 2 11:14:39.822: INFO: Pod "pod-1b12d70c-8c66-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105710107s
May 2 11:14:41.826: INFO: Pod "pod-1b12d70c-8c66-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.109719777s
STEP: Saw pod success
May 2 11:14:41.826: INFO: Pod "pod-1b12d70c-8c66-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:14:41.829: INFO: Trying to get logs from node hunter-worker pod pod-1b12d70c-8c66-11ea-8045-0242ac110017 container test-container:
STEP: delete the pod
May 2 11:14:41.866: INFO: Waiting for pod pod-1b12d70c-8c66-11ea-8045-0242ac110017 to disappear
May 2 11:14:41.887: INFO: Pod pod-1b12d70c-8c66-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:14:41.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gfm7l" for this suite.
May 2 11:14:47.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:14:47.959: INFO: namespace: e2e-tests-emptydir-gfm7l, resource: bindings, ignored listing per whitelist
May 2 11:14:48.014: INFO: namespace e2e-tests-emptydir-gfm7l deletion completed in 6.123433192s
• [SLOW TEST:12.432 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:14:48.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:14:55.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-5pjw4" for this suite.
May 2 11:15:17.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:15:17.791: INFO: namespace: e2e-tests-replication-controller-5pjw4, resource: bindings, ignored listing per whitelist
May 2 11:15:17.815: INFO: namespace e2e-tests-replication-controller-5pjw4 deletion completed in 22.089481037s
• [SLOW TEST:29.800 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:15:17.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
May 2 11:15:17.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zfjl7'
May 2 11:15:18.162: INFO: stderr: ""
May 2 11:15:18.162: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
May 2 11:15:19.166: INFO: Selector matched 1 pods for map[app:redis]
May 2 11:15:19.166: INFO: Found 0 / 1
May 2 11:15:20.166: INFO: Selector matched 1 pods for map[app:redis]
May 2 11:15:20.166: INFO: Found 0 / 1
May 2 11:15:21.167: INFO: Selector matched 1 pods for map[app:redis]
May 2 11:15:21.167: INFO: Found 0 / 1
May 2 11:15:22.167: INFO: Selector matched 1 pods for map[app:redis]
May 2 11:15:22.167: INFO: Found 1 / 1
May 2 11:15:22.167: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 2 11:15:22.171: INFO: Selector matched 1 pods for map[app:redis]
May 2 11:15:22.171: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
May 2 11:15:22.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-59wn5 redis-master --namespace=e2e-tests-kubectl-zfjl7'
May 2 11:15:22.287: INFO: stderr: ""
May 2 11:15:22.287: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 02 May 11:15:20.613 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 May 11:15:20.613 # Server started, Redis version 3.2.12\n1:M 02 May 11:15:20.613 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 May 11:15:20.613 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
May 2 11:15:22.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-59wn5 redis-master --namespace=e2e-tests-kubectl-zfjl7 --tail=1'
May 2 11:15:22.402: INFO: stderr: ""
May 2 11:15:22.402: INFO: stdout: "1:M 02 May 11:15:20.613 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
May 2 11:15:22.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-59wn5 redis-master --namespace=e2e-tests-kubectl-zfjl7 --limit-bytes=1'
May 2 11:15:22.521: INFO: stderr: ""
May 2 11:15:22.521: INFO: stdout: " "
STEP: exposing timestamps
May 2 11:15:22.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-59wn5 redis-master --namespace=e2e-tests-kubectl-zfjl7 --tail=1 --timestamps'
May 2 11:15:22.640: INFO: stderr: ""
May 2 11:15:22.640: INFO: stdout: "2020-05-02T11:15:20.613496401Z 1:M 02 May 11:15:20.613 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
May 2 11:15:25.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-59wn5 redis-master --namespace=e2e-tests-kubectl-zfjl7 --since=1s'
May 2 11:15:25.254: INFO: stderr: ""
May 2 11:15:25.254: INFO: stdout: ""
May 2 11:15:25.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-59wn5 redis-master --namespace=e2e-tests-kubectl-zfjl7 --since=24h'
May 2 11:15:25.355: INFO: stderr: ""
May 2 11:15:25.355: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 02 May 11:15:20.613 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 02 May 11:15:20.613 # Server started, Redis version 3.2.12\n1:M 02 May 11:15:20.613 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 02 May 11:15:20.613 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
May 2 11:15:25.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zfjl7'
May 2 11:15:25.455: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 2 11:15:25.455: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
May 2 11:15:25.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-zfjl7'
May 2 11:15:25.560: INFO: stderr: "No resources found.\n"
May 2 11:15:25.560: INFO: stdout: ""
May 2 11:15:25.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-zfjl7 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 2 11:15:25.653: INFO: stderr: ""
May 2 11:15:25.653: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:15:25.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zfjl7" for this suite.
May 2 11:15:31.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:15:31.786: INFO: namespace: e2e-tests-kubectl-zfjl7, resource: bindings, ignored listing per whitelist
May 2 11:15:31.807: INFO: namespace e2e-tests-kubectl-zfjl7 deletion completed in 6.150982248s
• [SLOW TEST:13.992 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:15:31.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
May 2 11:15:31.908: INFO: Waiting up to 5m0s for pod "pod-3c91bf29-8c66-11ea-8045-0242ac110017" in namespace "e2e-tests-emptydir-wwptx" to be "success or failure"
May 2 11:15:31.912: INFO: Pod "pod-3c91bf29-8c66-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.852655ms
May 2 11:15:33.916: INFO: Pod "pod-3c91bf29-8c66-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007922094s
May 2 11:15:35.920: INFO: Pod "pod-3c91bf29-8c66-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012331851s
STEP: Saw pod success
May 2 11:15:35.920: INFO: Pod "pod-3c91bf29-8c66-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:15:35.924: INFO: Trying to get logs from node hunter-worker pod pod-3c91bf29-8c66-11ea-8045-0242ac110017 container test-container: 
STEP: delete the pod
May 2 11:15:35.955: INFO: Waiting for pod pod-3c91bf29-8c66-11ea-8045-0242ac110017 to disappear
May 2 11:15:35.984: INFO: Pod pod-3c91bf29-8c66-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:15:35.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wwptx" for this suite.
May 2 11:15:42.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:15:42.082: INFO: namespace: e2e-tests-emptydir-wwptx, resource: bindings, ignored listing per whitelist
May 2 11:15:42.110: INFO: namespace e2e-tests-emptydir-wwptx deletion completed in 6.121779518s
• [SLOW TEST:10.303 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:15:42.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 2 11:15:42.260: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:15:42.261: INFO: Number of nodes with available pods: 0
May 2 11:15:42.262: INFO: Node hunter-worker is running more than one daemon pod
May 2 11:15:43.265: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:15:43.268: INFO: Number of nodes with available pods: 0
May 2 11:15:43.268: INFO: Node hunter-worker is running more than one daemon pod
May 2 11:15:44.266: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:15:44.269: INFO: Number of nodes with available pods: 0
May 2 11:15:44.269: INFO: Node hunter-worker is running more than one daemon pod
May 2 11:15:45.382: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:15:45.385: INFO: Number of nodes with available pods: 0
May 2 11:15:45.385: INFO: Node hunter-worker is running more than one daemon pod
May 2 11:15:46.340: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:15:46.344: INFO: Number of nodes with available pods: 2
May 2 11:15:46.344: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 2 11:15:46.382: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:15:46.387: INFO: Number of nodes with available pods: 1
May 2 11:15:46.387: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:15:47.391: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:15:47.394: INFO: Number of nodes with available pods: 1
May 2 11:15:47.394: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:15:48.820: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:15:49.513: INFO: Number of nodes with available pods: 1
May 2 11:15:49.513: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:15:50.391: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:15:50.394: INFO: Number of nodes with available pods: 1
May 2 11:15:50.394: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:15:51.392: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:15:51.395: INFO: Number of nodes with available pods: 2
May 2 11:15:51.395: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-cbrm8, will wait for the garbage collector to delete the pods
May 2 11:15:51.461: INFO: Deleting DaemonSet.extensions daemon-set took: 6.699295ms
May 2 11:15:51.562: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.302594ms
May 2 11:16:01.865: INFO: Number of nodes with available pods: 0
May 2 11:16:01.865: INFO: Number of running nodes: 0, number of available pods: 0
May 2 11:16:01.868: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-cbrm8/daemonsets","resourceVersion":"8337219"},"items":null}
May 2 11:16:01.870: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-cbrm8/pods","resourceVersion":"8337219"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:16:01.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-cbrm8" for this suite.
May 2 11:16:07.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:16:08.019: INFO: namespace: e2e-tests-daemonsets-cbrm8, resource: bindings, ignored listing per whitelist
May 2 11:16:08.036: INFO: namespace e2e-tests-daemonsets-cbrm8 deletion completed in 6.081009694s
• [SLOW TEST:25.926 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:16:08.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 2 11:16:08.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-zlbd2'
May 2 11:16:08.243: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 2 11:16:08.243: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
May 2 11:16:10.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-zlbd2'
May 2 11:16:10.465: INFO: stderr: ""
May 2 11:16:10.465: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:16:10.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zlbd2" for this suite.
May 2 11:18:12.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:18:12.815: INFO: namespace: e2e-tests-kubectl-zlbd2, resource: bindings, ignored listing per whitelist
May 2 11:18:12.893: INFO: namespace e2e-tests-kubectl-zlbd2 deletion completed in 2m2.424757781s
• [SLOW TEST:124.857 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:18:12.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:18:45.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-ggnsc" for this suite.
May 2 11:18:51.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:18:51.992: INFO: namespace: e2e-tests-container-runtime-ggnsc, resource: bindings, ignored listing per whitelist
May 2 11:18:52.002: INFO: namespace e2e-tests-container-runtime-ggnsc deletion completed in 6.26965227s
• [SLOW TEST:39.108 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:18:52.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
May 2 11:18:52.147: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix093357568/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:18:52.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-647z7" for this suite.
May 2 11:18:58.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:18:58.290: INFO: namespace: e2e-tests-kubectl-647z7, resource: bindings, ignored listing per whitelist
May 2 11:18:58.323: INFO: namespace e2e-tests-kubectl-647z7 deletion completed in 6.10532933s
• [SLOW TEST:6.321 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:18:58.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0502 11:19:08.450427 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 2 11:19:08.450: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:19:08.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-t8b25" for this suite.
May 2 11:19:14.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:19:14.496: INFO: namespace: e2e-tests-gc-t8b25, resource: bindings, ignored listing per whitelist
May 2 11:19:14.563: INFO: namespace e2e-tests-gc-t8b25 deletion completed in 6.109773707s
• [SLOW TEST:16.240 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:19:14.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
May 2 11:19:14.687: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 2 11:19:14.711: INFO: Waiting for terminating namespaces to be deleted...
May 2 11:19:14.714: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
May 2 11:19:14.722: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
May 2 11:19:14.722: INFO: Container kube-proxy ready: true, restart count 0
May 2 11:19:14.722: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 2 11:19:14.722: INFO: Container kindnet-cni ready: true, restart count 0
May 2 11:19:14.722: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 2 11:19:14.722: INFO: Container coredns ready: true, restart count 0
May 2 11:19:14.722: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
May 2 11:19:14.729: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 2 11:19:14.729: INFO: Container kube-proxy ready: true, restart count 0
May 2 11:19:14.729: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 2 11:19:14.729: INFO: Container kindnet-cni ready: true, restart count 0
May 2 11:19:14.729: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 2 11:19:14.729: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c3d05c1e-8c66-11ea-8045-0242ac110017 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c3d05c1e-8c66-11ea-8045-0242ac110017 off the node hunter-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-c3d05c1e-8c66-11ea-8045-0242ac110017 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:19:22.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-xtrlj" for this suite. May 2 11:19:50.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:19:50.912: INFO: namespace: e2e-tests-sched-pred-xtrlj, resource: bindings, ignored listing per whitelist May 2 11:19:50.971: INFO: namespace e2e-tests-sched-pred-xtrlj deletion completed in 28.08957858s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:36.408 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:19:50.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support 
(root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium May 2 11:19:51.127: INFO: Waiting up to 5m0s for pod "pod-d70d681e-8c66-11ea-8045-0242ac110017" in namespace "e2e-tests-emptydir-txsht" to be "success or failure" May 2 11:19:51.131: INFO: Pod "pod-d70d681e-8c66-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.410053ms May 2 11:19:53.135: INFO: Pod "pod-d70d681e-8c66-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007285652s May 2 11:19:55.139: INFO: Pod "pod-d70d681e-8c66-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011383953s STEP: Saw pod success May 2 11:19:55.139: INFO: Pod "pod-d70d681e-8c66-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:19:55.142: INFO: Trying to get logs from node hunter-worker pod pod-d70d681e-8c66-11ea-8045-0242ac110017 container test-container: STEP: delete the pod May 2 11:19:55.189: INFO: Waiting for pod pod-d70d681e-8c66-11ea-8045-0242ac110017 to disappear May 2 11:19:55.289: INFO: Pod pod-d70d681e-8c66-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:19:55.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-txsht" for this suite. 
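The emptydir test's "Waiting up to 5m0s for pod ... to be 'success or failure'" is a poll on the pod phase until it turns terminal (Succeeded or Failed), as the Pending/Pending/Succeeded progression above shows. A rough sketch of that loop, with the API call stubbed out (hypothetical names; the real wait lives in the e2e framework):

```python
# Poll a pod's phase until it is terminal or a deadline passes. get_phase
# stands in for a real API read; sleep is injectable so the loop is testable.
import time

def wait_for_terminal_phase(get_phase, timeout_s=300, interval_s=2.0, sleep=time.sleep):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval_s)
    raise TimeoutError("pod never reached a terminal phase")

phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), sleep=lambda _: None))  # Succeeded
```

The test then treats either terminal phase as "condition satisfied" and asserts on which one it was, plus the container logs.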
May 2 11:20:01.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:20:01.374: INFO: namespace: e2e-tests-emptydir-txsht, resource: bindings, ignored listing per whitelist May 2 11:20:01.392: INFO: namespace e2e-tests-emptydir-txsht deletion completed in 6.099981035s • [SLOW TEST:10.421 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:20:01.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-8vwg STEP: Creating a pod to test atomic-volume-subpath May 2 11:20:01.624: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8vwg" in namespace "e2e-tests-subpath-ghnss" to be "success or failure" May 2 11:20:01.627: INFO: Pod "pod-subpath-test-projected-8vwg": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.027948ms May 2 11:20:03.631: INFO: Pod "pod-subpath-test-projected-8vwg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006857209s May 2 11:20:05.635: INFO: Pod "pod-subpath-test-projected-8vwg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011095586s May 2 11:20:07.640: INFO: Pod "pod-subpath-test-projected-8vwg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015886356s May 2 11:20:09.644: INFO: Pod "pod-subpath-test-projected-8vwg": Phase="Running", Reason="", readiness=false. Elapsed: 8.0201572s May 2 11:20:11.648: INFO: Pod "pod-subpath-test-projected-8vwg": Phase="Running", Reason="", readiness=false. Elapsed: 10.024148058s May 2 11:20:13.653: INFO: Pod "pod-subpath-test-projected-8vwg": Phase="Running", Reason="", readiness=false. Elapsed: 12.028351918s May 2 11:20:15.657: INFO: Pod "pod-subpath-test-projected-8vwg": Phase="Running", Reason="", readiness=false. Elapsed: 14.032891269s May 2 11:20:17.662: INFO: Pod "pod-subpath-test-projected-8vwg": Phase="Running", Reason="", readiness=false. Elapsed: 16.037451793s May 2 11:20:19.666: INFO: Pod "pod-subpath-test-projected-8vwg": Phase="Running", Reason="", readiness=false. Elapsed: 18.041911394s May 2 11:20:21.671: INFO: Pod "pod-subpath-test-projected-8vwg": Phase="Running", Reason="", readiness=false. Elapsed: 20.046513717s May 2 11:20:23.675: INFO: Pod "pod-subpath-test-projected-8vwg": Phase="Running", Reason="", readiness=false. Elapsed: 22.050870278s May 2 11:20:25.679: INFO: Pod "pod-subpath-test-projected-8vwg": Phase="Running", Reason="", readiness=false. Elapsed: 24.054897388s May 2 11:20:27.691: INFO: Pod "pod-subpath-test-projected-8vwg": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.067168312s STEP: Saw pod success May 2 11:20:27.691: INFO: Pod "pod-subpath-test-projected-8vwg" satisfied condition "success or failure" May 2 11:20:27.695: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-projected-8vwg container test-container-subpath-projected-8vwg: STEP: delete the pod May 2 11:20:27.741: INFO: Waiting for pod pod-subpath-test-projected-8vwg to disappear May 2 11:20:27.792: INFO: Pod pod-subpath-test-projected-8vwg no longer exists STEP: Deleting pod pod-subpath-test-projected-8vwg May 2 11:20:27.792: INFO: Deleting pod "pod-subpath-test-projected-8vwg" in namespace "e2e-tests-subpath-ghnss" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:20:27.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-ghnss" for this suite. May 2 11:20:33.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:20:33.884: INFO: namespace: e2e-tests-subpath-ghnss, resource: bindings, ignored listing per whitelist May 2 11:20:33.892: INFO: namespace e2e-tests-subpath-ghnss deletion completed in 6.091883996s • [SLOW TEST:32.499 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:20:33.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 2 11:20:34.050: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 2 11:20:34.059: INFO: Number of nodes with available pods: 0 May 2 11:20:34.059: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 2 11:20:34.092: INFO: Number of nodes with available pods: 0 May 2 11:20:34.092: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:35.097: INFO: Number of nodes with available pods: 0 May 2 11:20:35.097: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:36.104: INFO: Number of nodes with available pods: 0 May 2 11:20:36.104: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:37.096: INFO: Number of nodes with available pods: 1 May 2 11:20:37.096: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 2 11:20:37.146: INFO: Number of nodes with available pods: 1 May 2 11:20:37.147: INFO: Number of running nodes: 0, number of available pods: 1 May 2 11:20:38.151: INFO: Number of nodes with available pods: 0 May 2 11:20:38.151: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update 
strategy to RollingUpdate May 2 11:20:38.167: INFO: Number of nodes with available pods: 0 May 2 11:20:38.167: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:39.171: INFO: Number of nodes with available pods: 0 May 2 11:20:39.171: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:40.172: INFO: Number of nodes with available pods: 0 May 2 11:20:40.172: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:41.172: INFO: Number of nodes with available pods: 0 May 2 11:20:41.172: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:42.172: INFO: Number of nodes with available pods: 0 May 2 11:20:42.172: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:43.171: INFO: Number of nodes with available pods: 0 May 2 11:20:43.171: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:44.172: INFO: Number of nodes with available pods: 0 May 2 11:20:44.172: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:45.171: INFO: Number of nodes with available pods: 0 May 2 11:20:45.171: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:46.172: INFO: Number of nodes with available pods: 0 May 2 11:20:46.172: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:47.172: INFO: Number of nodes with available pods: 0 May 2 11:20:47.172: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:48.172: INFO: Number of nodes with available pods: 0 May 2 11:20:48.172: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:49.171: INFO: Number of nodes with available pods: 0 May 2 11:20:49.171: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:50.172: INFO: Number of nodes with available pods: 0 May 2 11:20:50.172: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:51.172: INFO: Number of nodes with available pods: 0 May 2 
11:20:51.172: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:52.172: INFO: Number of nodes with available pods: 0 May 2 11:20:52.172: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:53.172: INFO: Number of nodes with available pods: 0 May 2 11:20:53.172: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:54.173: INFO: Number of nodes with available pods: 0 May 2 11:20:54.173: INFO: Node hunter-worker is running more than one daemon pod May 2 11:20:55.171: INFO: Number of nodes with available pods: 1 May 2 11:20:55.171: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-988wh, will wait for the garbage collector to delete the pods May 2 11:20:55.235: INFO: Deleting DaemonSet.extensions daemon-set took: 6.465727ms May 2 11:20:55.335: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.244394ms May 2 11:21:01.338: INFO: Number of nodes with available pods: 0 May 2 11:21:01.338: INFO: Number of running nodes: 0, number of available pods: 0 May 2 11:21:01.340: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-988wh/daemonsets","resourceVersion":"8338140"},"items":null} May 2 11:21:01.342: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-988wh/pods","resourceVersion":"8338140"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:21:01.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-988wh" for this suite. 
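The long run of "Number of nodes with available pods" / "running more than one daemon pod" lines above is a poll over the DaemonSet's pods: count the nodes that have at least one available pod, flag nodes carrying more than one, and declare the rollout settled when every selected node runs exactly one available pod. A rough model under assumed data shapes (not the e2e suite's code):

```python
# daemon_pods: list of (node, available) tuples for the DaemonSet's pods.
# selected_nodes: nodes matching the DaemonSet's node selector (e.g. "green").

def daemonset_settled(daemon_pods, selected_nodes):
    per_node = {}
    for node, available in daemon_pods:
        per_node.setdefault(node, []).append(available)
    nodes_with_available = sum(1 for flags in per_node.values() if any(flags))
    overscheduled = sorted(n for n, flags in per_node.items() if len(flags) > 1)
    settled = not overscheduled and nodes_with_available == len(selected_nodes)
    return nodes_with_available, overscheduled, settled

# One green-labeled node with one available daemon pod: rollout has settled.
print(daemonset_settled([("hunter-worker", True)], ["hunter-worker"]))  # (1, [], True)
```

During the label flip from blue to green the old pod is still terminating while the new one is not yet available, which is why the count sits at 0 for many iterations before jumping to 1.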
May 2 11:21:07.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:21:07.414: INFO: namespace: e2e-tests-daemonsets-988wh, resource: bindings, ignored listing per whitelist May 2 11:21:07.464: INFO: namespace e2e-tests-daemonsets-988wh deletion completed in 6.085044521s • [SLOW TEST:33.572 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:21:07.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-7xtdv in namespace e2e-tests-proxy-h85bz I0502 11:21:07.656743 6 runners.go:184] Created replication controller with name: proxy-service-7xtdv, namespace: e2e-tests-proxy-h85bz, replica count: 1 I0502 11:21:08.707135 6 runners.go:184] proxy-service-7xtdv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0502 11:21:09.707329 6 runners.go:184] proxy-service-7xtdv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady I0502 11:21:10.707616 6 runners.go:184] proxy-service-7xtdv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0502 11:21:11.707885 6 runners.go:184] proxy-service-7xtdv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0502 11:21:12.708288 6 runners.go:184] proxy-service-7xtdv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0502 11:21:13.708510 6 runners.go:184] proxy-service-7xtdv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0502 11:21:14.708755 6 runners.go:184] proxy-service-7xtdv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0502 11:21:15.708986 6 runners.go:184] proxy-service-7xtdv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0502 11:21:16.709389 6 runners.go:184] proxy-service-7xtdv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 2 11:21:16.713: INFO: setup took 9.105174495s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 2 11:21:16.721: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-h85bz/pods/proxy-service-7xtdv-b5mhh:162/proxy/: bar (200; 7.487539ms) May 2 11:21:16.721: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-h85bz/services/proxy-service-7xtdv:portname2/proxy/: bar (200; 8.153302ms) May 2 11:21:16.721: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-h85bz/services/proxy-service-7xtdv:portname1/proxy/: foo (200; 8.157493ms) May 2 11:21:16.725: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-h85bz/services/http:proxy-service-7xtdv:portname2/proxy/: bar (200; 11.722875ms) May 2 
11:21:16.725: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-h85bz/services/http:proxy-service-7xtdv:portname1/proxy/: foo (200; 12.015466ms) May 2 11:21:16.725: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-h85bz/pods/proxy-service-7xtdv-b5mhh:160/proxy/: foo (200; 11.884289ms) May 2 11:21:16.725: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-h85bz/pods/http:proxy-service-7xtdv-b5mhh:162/proxy/: bar (200; 11.910546ms) May 2 11:21:16.725: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-h85bz/pods/proxy-service-7xtdv-b5mhh/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-745sk [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-745sk STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-745sk May 2 11:21:27.594: INFO: Found 0 stateful pods, waiting for 1 May 2 11:21:37.599: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 2 11:21:37.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 11:21:37.817: INFO: stderr: "I0502 11:21:37.727259 1626 log.go:172] (0xc000138630) (0xc000704640) Create 
stream\nI0502 11:21:37.727318 1626 log.go:172] (0xc000138630) (0xc000704640) Stream added, broadcasting: 1\nI0502 11:21:37.729773 1626 log.go:172] (0xc000138630) Reply frame received for 1\nI0502 11:21:37.729816 1626 log.go:172] (0xc000138630) (0xc00057ce60) Create stream\nI0502 11:21:37.729829 1626 log.go:172] (0xc000138630) (0xc00057ce60) Stream added, broadcasting: 3\nI0502 11:21:37.730651 1626 log.go:172] (0xc000138630) Reply frame received for 3\nI0502 11:21:37.730686 1626 log.go:172] (0xc000138630) (0xc00057cfa0) Create stream\nI0502 11:21:37.730701 1626 log.go:172] (0xc000138630) (0xc00057cfa0) Stream added, broadcasting: 5\nI0502 11:21:37.731425 1626 log.go:172] (0xc000138630) Reply frame received for 5\nI0502 11:21:37.811179 1626 log.go:172] (0xc000138630) Data frame received for 3\nI0502 11:21:37.811230 1626 log.go:172] (0xc00057ce60) (3) Data frame handling\nI0502 11:21:37.811269 1626 log.go:172] (0xc00057ce60) (3) Data frame sent\nI0502 11:21:37.811306 1626 log.go:172] (0xc000138630) Data frame received for 3\nI0502 11:21:37.811334 1626 log.go:172] (0xc00057ce60) (3) Data frame handling\nI0502 11:21:37.811459 1626 log.go:172] (0xc000138630) Data frame received for 5\nI0502 11:21:37.811492 1626 log.go:172] (0xc00057cfa0) (5) Data frame handling\nI0502 11:21:37.812683 1626 log.go:172] (0xc000138630) Data frame received for 1\nI0502 11:21:37.812699 1626 log.go:172] (0xc000704640) (1) Data frame handling\nI0502 11:21:37.812709 1626 log.go:172] (0xc000704640) (1) Data frame sent\nI0502 11:21:37.812716 1626 log.go:172] (0xc000138630) (0xc000704640) Stream removed, broadcasting: 1\nI0502 11:21:37.812738 1626 log.go:172] (0xc000138630) Go away received\nI0502 11:21:37.812980 1626 log.go:172] (0xc000138630) (0xc000704640) Stream removed, broadcasting: 1\nI0502 11:21:37.813005 1626 log.go:172] (0xc000138630) (0xc00057ce60) Stream removed, broadcasting: 3\nI0502 11:21:37.813017 1626 log.go:172] (0xc000138630) (0xc00057cfa0) Stream removed, broadcasting: 5\n" May 2 
11:21:37.817: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 11:21:37.817: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 11:21:37.821: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 2 11:21:47.824: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 2 11:21:47.824: INFO: Waiting for statefulset status.replicas updated to 0 May 2 11:21:47.869: INFO: POD NODE PHASE GRACE CONDITIONS May 2 11:21:47.870: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC }] May 2 11:21:47.870: INFO: May 2 11:21:47.870: INFO: StatefulSet ss has not reached scale 3, at 1 May 2 11:21:48.875: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.964786345s May 2 11:21:50.028: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.959610592s May 2 11:21:51.033: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.806093144s May 2 11:21:52.076: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.80113945s May 2 11:21:53.082: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.758263655s May 2 11:21:54.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.752598379s May 2 11:21:55.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.747628102s May 2 11:21:56.304: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.53547904s May 2 11:21:57.309: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 530.765935ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-745sk May 2 11:21:58.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 11:21:58.524: INFO: stderr: "I0502 11:21:58.439285 1649 log.go:172] (0xc000728160) (0xc00011d360) Create stream\nI0502 11:21:58.439343 1649 log.go:172] (0xc000728160) (0xc00011d360) Stream added, broadcasting: 1\nI0502 11:21:58.441841 1649 log.go:172] (0xc000728160) Reply frame received for 1\nI0502 11:21:58.441887 1649 log.go:172] (0xc000728160) (0xc000206000) Create stream\nI0502 11:21:58.441900 1649 log.go:172] (0xc000728160) (0xc000206000) Stream added, broadcasting: 3\nI0502 11:21:58.442702 1649 log.go:172] (0xc000728160) Reply frame received for 3\nI0502 11:21:58.442734 1649 log.go:172] (0xc000728160) (0xc00011d400) Create stream\nI0502 11:21:58.442746 1649 log.go:172] (0xc000728160) (0xc00011d400) Stream added, broadcasting: 5\nI0502 11:21:58.443686 1649 log.go:172] (0xc000728160) Reply frame received for 5\nI0502 11:21:58.519435 1649 log.go:172] (0xc000728160) Data frame received for 5\nI0502 11:21:58.519469 1649 log.go:172] (0xc00011d400) (5) Data frame handling\nI0502 11:21:58.519499 1649 log.go:172] (0xc000728160) Data frame received for 3\nI0502 11:21:58.519508 1649 log.go:172] (0xc000206000) (3) Data frame handling\nI0502 11:21:58.519515 1649 log.go:172] (0xc000206000) (3) Data frame sent\nI0502 11:21:58.519522 1649 log.go:172] (0xc000728160) Data frame received for 3\nI0502 11:21:58.519528 1649 log.go:172] (0xc000206000) (3) Data frame handling\nI0502 11:21:58.520655 1649 log.go:172] (0xc000728160) Data frame received for 1\nI0502 11:21:58.520694 1649 log.go:172] (0xc00011d360) (1) Data frame handling\nI0502 11:21:58.520716 1649 log.go:172] 
(0xc00011d360) (1) Data frame sent\nI0502 11:21:58.520747 1649 log.go:172] (0xc000728160) (0xc00011d360) Stream removed, broadcasting: 1\nI0502 11:21:58.520774 1649 log.go:172] (0xc000728160) Go away received\nI0502 11:21:58.520964 1649 log.go:172] (0xc000728160) (0xc00011d360) Stream removed, broadcasting: 1\nI0502 11:21:58.520989 1649 log.go:172] (0xc000728160) (0xc000206000) Stream removed, broadcasting: 3\nI0502 11:21:58.521001 1649 log.go:172] (0xc000728160) (0xc00011d400) Stream removed, broadcasting: 5\n" May 2 11:21:58.524: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 11:21:58.524: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 11:21:58.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 11:21:58.714: INFO: stderr: "I0502 11:21:58.643951 1672 log.go:172] (0xc0008222c0) (0xc00066d360) Create stream\nI0502 11:21:58.644008 1672 log.go:172] (0xc0008222c0) (0xc00066d360) Stream added, broadcasting: 1\nI0502 11:21:58.646671 1672 log.go:172] (0xc0008222c0) Reply frame received for 1\nI0502 11:21:58.646727 1672 log.go:172] (0xc0008222c0) (0xc00066d400) Create stream\nI0502 11:21:58.646742 1672 log.go:172] (0xc0008222c0) (0xc00066d400) Stream added, broadcasting: 3\nI0502 11:21:58.647937 1672 log.go:172] (0xc0008222c0) Reply frame received for 3\nI0502 11:21:58.647993 1672 log.go:172] (0xc0008222c0) (0xc0002f4000) Create stream\nI0502 11:21:58.648016 1672 log.go:172] (0xc0008222c0) (0xc0002f4000) Stream added, broadcasting: 5\nI0502 11:21:58.649422 1672 log.go:172] (0xc0008222c0) Reply frame received for 5\nI0502 11:21:58.708961 1672 log.go:172] (0xc0008222c0) Data frame received for 3\nI0502 11:21:58.709007 1672 log.go:172] (0xc00066d400) (3) Data frame handling\nI0502 11:21:58.709015 1672 
log.go:172] (0xc00066d400) (3) Data frame sent\nI0502 11:21:58.709020 1672 log.go:172] (0xc0008222c0) Data frame received for 3\nI0502 11:21:58.709029 1672 log.go:172] (0xc00066d400) (3) Data frame handling\nI0502 11:21:58.709068 1672 log.go:172] (0xc0008222c0) Data frame received for 5\nI0502 11:21:58.709082 1672 log.go:172] (0xc0002f4000) (5) Data frame handling\nI0502 11:21:58.709097 1672 log.go:172] (0xc0002f4000) (5) Data frame sent\nI0502 11:21:58.709105 1672 log.go:172] (0xc0008222c0) Data frame received for 5\nmv: can't rename '/tmp/index.html': No such file or directory\nI0502 11:21:58.709236 1672 log.go:172] (0xc0002f4000) (5) Data frame handling\nI0502 11:21:58.710817 1672 log.go:172] (0xc0008222c0) Data frame received for 1\nI0502 11:21:58.710841 1672 log.go:172] (0xc00066d360) (1) Data frame handling\nI0502 11:21:58.710867 1672 log.go:172] (0xc00066d360) (1) Data frame sent\nI0502 11:21:58.710898 1672 log.go:172] (0xc0008222c0) (0xc00066d360) Stream removed, broadcasting: 1\nI0502 11:21:58.710927 1672 log.go:172] (0xc0008222c0) Go away received\nI0502 11:21:58.711167 1672 log.go:172] (0xc0008222c0) (0xc00066d360) Stream removed, broadcasting: 1\nI0502 11:21:58.711182 1672 log.go:172] (0xc0008222c0) (0xc00066d400) Stream removed, broadcasting: 3\nI0502 11:21:58.711188 1672 log.go:172] (0xc0008222c0) (0xc0002f4000) Stream removed, broadcasting: 5\n" May 2 11:21:58.715: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 11:21:58.715: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 11:21:58.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 11:21:58.915: INFO: stderr: "I0502 11:21:58.841400 1694 log.go:172] (0xc0001386e0) (0xc00077d4a0) Create stream\nI0502 11:21:58.841464 1694 log.go:172] 
(0xc0001386e0) (0xc00077d4a0) Stream added, broadcasting: 1\nI0502 11:21:58.844365 1694 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0502 11:21:58.844399 1694 log.go:172] (0xc0001386e0) (0xc0006de000) Create stream\nI0502 11:21:58.844410 1694 log.go:172] (0xc0001386e0) (0xc0006de000) Stream added, broadcasting: 3\nI0502 11:21:58.845372 1694 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0502 11:21:58.845420 1694 log.go:172] (0xc0001386e0) (0xc0007ac000) Create stream\nI0502 11:21:58.845433 1694 log.go:172] (0xc0001386e0) (0xc0007ac000) Stream added, broadcasting: 5\nI0502 11:21:58.846437 1694 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0502 11:21:58.908897 1694 log.go:172] (0xc0001386e0) Data frame received for 3\nI0502 11:21:58.908938 1694 log.go:172] (0xc0006de000) (3) Data frame handling\nI0502 11:21:58.908964 1694 log.go:172] (0xc0006de000) (3) Data frame sent\nI0502 11:21:58.908978 1694 log.go:172] (0xc0001386e0) Data frame received for 3\nI0502 11:21:58.908995 1694 log.go:172] (0xc0006de000) (3) Data frame handling\nI0502 11:21:58.909325 1694 log.go:172] (0xc0001386e0) Data frame received for 5\nI0502 11:21:58.909362 1694 log.go:172] (0xc0007ac000) (5) Data frame handling\nI0502 11:21:58.909376 1694 log.go:172] (0xc0007ac000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0502 11:21:58.909515 1694 log.go:172] (0xc0001386e0) Data frame received for 5\nI0502 11:21:58.909551 1694 log.go:172] (0xc0007ac000) (5) Data frame handling\nI0502 11:21:58.911504 1694 log.go:172] (0xc0001386e0) Data frame received for 1\nI0502 11:21:58.911595 1694 log.go:172] (0xc00077d4a0) (1) Data frame handling\nI0502 11:21:58.911640 1694 log.go:172] (0xc00077d4a0) (1) Data frame sent\nI0502 11:21:58.911725 1694 log.go:172] (0xc0001386e0) (0xc00077d4a0) Stream removed, broadcasting: 1\nI0502 11:21:58.911757 1694 log.go:172] (0xc0001386e0) Go away received\nI0502 11:21:58.911942 1694 log.go:172] (0xc0001386e0) 
(0xc00077d4a0) Stream removed, broadcasting: 1\nI0502 11:21:58.911957 1694 log.go:172] (0xc0001386e0) (0xc0006de000) Stream removed, broadcasting: 3\nI0502 11:21:58.911965 1694 log.go:172] (0xc0001386e0) (0xc0007ac000) Stream removed, broadcasting: 5\n" May 2 11:21:58.915: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 11:21:58.915: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 11:21:58.939: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 2 11:22:08.944: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 2 11:22:08.944: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 2 11:22:08.944: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 2 11:22:08.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 11:22:09.162: INFO: stderr: "I0502 11:22:09.062879 1716 log.go:172] (0xc000138840) (0xc0007c4640) Create stream\nI0502 11:22:09.062929 1716 log.go:172] (0xc000138840) (0xc0007c4640) Stream added, broadcasting: 1\nI0502 11:22:09.065277 1716 log.go:172] (0xc000138840) Reply frame received for 1\nI0502 11:22:09.065313 1716 log.go:172] (0xc000138840) (0xc0006d4c80) Create stream\nI0502 11:22:09.065321 1716 log.go:172] (0xc000138840) (0xc0006d4c80) Stream added, broadcasting: 3\nI0502 11:22:09.065953 1716 log.go:172] (0xc000138840) Reply frame received for 3\nI0502 11:22:09.065990 1716 log.go:172] (0xc000138840) (0xc0006d4dc0) Create stream\nI0502 11:22:09.066000 1716 log.go:172] (0xc000138840) (0xc0006d4dc0) Stream added, broadcasting: 5\nI0502 11:22:09.066551 1716 log.go:172] 
(0xc000138840) Reply frame received for 5\nI0502 11:22:09.158802 1716 log.go:172] (0xc000138840) Data frame received for 3\nI0502 11:22:09.158842 1716 log.go:172] (0xc0006d4c80) (3) Data frame handling\nI0502 11:22:09.158854 1716 log.go:172] (0xc0006d4c80) (3) Data frame sent\nI0502 11:22:09.158860 1716 log.go:172] (0xc000138840) Data frame received for 3\nI0502 11:22:09.158865 1716 log.go:172] (0xc0006d4c80) (3) Data frame handling\nI0502 11:22:09.158891 1716 log.go:172] (0xc000138840) Data frame received for 5\nI0502 11:22:09.158898 1716 log.go:172] (0xc0006d4dc0) (5) Data frame handling\nI0502 11:22:09.159867 1716 log.go:172] (0xc000138840) Data frame received for 1\nI0502 11:22:09.159881 1716 log.go:172] (0xc0007c4640) (1) Data frame handling\nI0502 11:22:09.159899 1716 log.go:172] (0xc0007c4640) (1) Data frame sent\nI0502 11:22:09.159914 1716 log.go:172] (0xc000138840) (0xc0007c4640) Stream removed, broadcasting: 1\nI0502 11:22:09.159931 1716 log.go:172] (0xc000138840) Go away received\nI0502 11:22:09.160053 1716 log.go:172] (0xc000138840) (0xc0007c4640) Stream removed, broadcasting: 1\nI0502 11:22:09.160070 1716 log.go:172] (0xc000138840) (0xc0006d4c80) Stream removed, broadcasting: 3\nI0502 11:22:09.160082 1716 log.go:172] (0xc000138840) (0xc0006d4dc0) Stream removed, broadcasting: 5\n" May 2 11:22:09.162: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 11:22:09.162: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 11:22:09.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 11:22:09.446: INFO: stderr: "I0502 11:22:09.285635 1739 log.go:172] (0xc000168840) (0xc00051b2c0) Create stream\nI0502 11:22:09.285684 1739 log.go:172] (0xc000168840) (0xc00051b2c0) Stream added, broadcasting: 1\nI0502 
11:22:09.298576 1739 log.go:172] (0xc000168840) Reply frame received for 1\nI0502 11:22:09.298636 1739 log.go:172] (0xc000168840) (0xc000526000) Create stream\nI0502 11:22:09.298651 1739 log.go:172] (0xc000168840) (0xc000526000) Stream added, broadcasting: 3\nI0502 11:22:09.299447 1739 log.go:172] (0xc000168840) Reply frame received for 3\nI0502 11:22:09.299494 1739 log.go:172] (0xc000168840) (0xc000784000) Create stream\nI0502 11:22:09.299514 1739 log.go:172] (0xc000168840) (0xc000784000) Stream added, broadcasting: 5\nI0502 11:22:09.300094 1739 log.go:172] (0xc000168840) Reply frame received for 5\nI0502 11:22:09.440261 1739 log.go:172] (0xc000168840) Data frame received for 3\nI0502 11:22:09.440324 1739 log.go:172] (0xc000526000) (3) Data frame handling\nI0502 11:22:09.440353 1739 log.go:172] (0xc000526000) (3) Data frame sent\nI0502 11:22:09.440489 1739 log.go:172] (0xc000168840) Data frame received for 5\nI0502 11:22:09.440527 1739 log.go:172] (0xc000784000) (5) Data frame handling\nI0502 11:22:09.440551 1739 log.go:172] (0xc000168840) Data frame received for 3\nI0502 11:22:09.440563 1739 log.go:172] (0xc000526000) (3) Data frame handling\nI0502 11:22:09.442151 1739 log.go:172] (0xc000168840) Data frame received for 1\nI0502 11:22:09.442161 1739 log.go:172] (0xc00051b2c0) (1) Data frame handling\nI0502 11:22:09.442171 1739 log.go:172] (0xc00051b2c0) (1) Data frame sent\nI0502 11:22:09.442390 1739 log.go:172] (0xc000168840) (0xc00051b2c0) Stream removed, broadcasting: 1\nI0502 11:22:09.442508 1739 log.go:172] (0xc000168840) Go away received\nI0502 11:22:09.442608 1739 log.go:172] (0xc000168840) (0xc00051b2c0) Stream removed, broadcasting: 1\nI0502 11:22:09.442640 1739 log.go:172] (0xc000168840) (0xc000526000) Stream removed, broadcasting: 3\nI0502 11:22:09.442659 1739 log.go:172] (0xc000168840) (0xc000784000) Stream removed, broadcasting: 5\n" May 2 11:22:09.447: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 11:22:09.447: INFO: 
stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 11:22:09.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 11:22:09.655: INFO: stderr: "I0502 11:22:09.562665 1761 log.go:172] (0xc0008502c0) (0xc000752640) Create stream\nI0502 11:22:09.562735 1761 log.go:172] (0xc0008502c0) (0xc000752640) Stream added, broadcasting: 1\nI0502 11:22:09.565306 1761 log.go:172] (0xc0008502c0) Reply frame received for 1\nI0502 11:22:09.565356 1761 log.go:172] (0xc0008502c0) (0xc0007526e0) Create stream\nI0502 11:22:09.565381 1761 log.go:172] (0xc0008502c0) (0xc0007526e0) Stream added, broadcasting: 3\nI0502 11:22:09.566242 1761 log.go:172] (0xc0008502c0) Reply frame received for 3\nI0502 11:22:09.566276 1761 log.go:172] (0xc0008502c0) (0xc0005b2be0) Create stream\nI0502 11:22:09.566288 1761 log.go:172] (0xc0008502c0) (0xc0005b2be0) Stream added, broadcasting: 5\nI0502 11:22:09.567148 1761 log.go:172] (0xc0008502c0) Reply frame received for 5\nI0502 11:22:09.649764 1761 log.go:172] (0xc0008502c0) Data frame received for 3\nI0502 11:22:09.649796 1761 log.go:172] (0xc0007526e0) (3) Data frame handling\nI0502 11:22:09.649812 1761 log.go:172] (0xc0007526e0) (3) Data frame sent\nI0502 11:22:09.650113 1761 log.go:172] (0xc0008502c0) Data frame received for 3\nI0502 11:22:09.650152 1761 log.go:172] (0xc0007526e0) (3) Data frame handling\nI0502 11:22:09.650195 1761 log.go:172] (0xc0008502c0) Data frame received for 5\nI0502 11:22:09.650236 1761 log.go:172] (0xc0005b2be0) (5) Data frame handling\nI0502 11:22:09.651391 1761 log.go:172] (0xc0008502c0) Data frame received for 1\nI0502 11:22:09.651415 1761 log.go:172] (0xc000752640) (1) Data frame handling\nI0502 11:22:09.651440 1761 log.go:172] (0xc000752640) (1) Data frame sent\nI0502 11:22:09.651616 1761 log.go:172] 
(0xc0008502c0) (0xc000752640) Stream removed, broadcasting: 1\nI0502 11:22:09.651718 1761 log.go:172] (0xc0008502c0) Go away received\nI0502 11:22:09.651754 1761 log.go:172] (0xc0008502c0) (0xc000752640) Stream removed, broadcasting: 1\nI0502 11:22:09.651767 1761 log.go:172] (0xc0008502c0) (0xc0007526e0) Stream removed, broadcasting: 3\nI0502 11:22:09.651777 1761 log.go:172] (0xc0008502c0) (0xc0005b2be0) Stream removed, broadcasting: 5\n" May 2 11:22:09.655: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 11:22:09.655: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 11:22:09.655: INFO: Waiting for statefulset status.replicas updated to 0 May 2 11:22:09.658: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 2 11:22:19.667: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 2 11:22:19.667: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 2 11:22:19.667: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 2 11:22:19.855: INFO: POD NODE PHASE GRACE CONDITIONS May 2 11:22:19.855: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC }] May 2 11:22:19.855: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with 
unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:19.855: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:19.856: INFO: May 2 11:22:19.856: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 11:22:20.860: INFO: POD NODE PHASE GRACE CONDITIONS May 2 11:22:20.860: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC }] May 2 11:22:20.860: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:20.860: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:20.860: INFO: May 2 11:22:20.860: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 11:22:21.873: INFO: POD NODE PHASE GRACE CONDITIONS May 2 11:22:21.873: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC }] May 2 11:22:21.873: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:21.873: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 
UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:21.873: INFO: May 2 11:22:21.873: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 11:22:22.878: INFO: POD NODE PHASE GRACE CONDITIONS May 2 11:22:22.878: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC }] May 2 11:22:22.878: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:22.878: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:22.879: INFO: May 2 11:22:22.879: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 11:22:23.920: INFO: POD NODE PHASE GRACE CONDITIONS May 2 11:22:23.920: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC }] May 2 11:22:23.921: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:23.921: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:23.921: INFO: May 2 11:22:23.921: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 11:22:24.925: INFO: POD NODE PHASE GRACE CONDITIONS May 2 11:22:24.925: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC }] May 2 11:22:24.925: 
INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:24.925: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:24.925: INFO: May 2 11:22:24.925: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 11:22:25.935: INFO: POD NODE PHASE GRACE CONDITIONS May 2 11:22:25.935: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC }] May 2 11:22:25.935: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:25.935: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:25.935: INFO: May 2 11:22:25.935: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 11:22:26.940: INFO: POD NODE PHASE GRACE CONDITIONS May 2 11:22:26.940: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC }] May 2 11:22:26.940: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:26.940: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:26.940: INFO: May 2 11:22:26.940: INFO: StatefulSet ss has not reached scale 0, at 3 May 2 11:22:27.945: INFO: POD NODE PHASE GRACE CONDITIONS May 2 11:22:27.945: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC }] May 2 11:22:27.945: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:27.945: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:27.945: INFO: May 2 11:22:27.945: INFO: StatefulSet ss has not reached scale 
0, at 3 May 2 11:22:28.951: INFO: POD NODE PHASE GRACE CONDITIONS May 2 11:22:28.951: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:27 +0000 UTC }] May 2 11:22:28.951: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:28.951: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:22:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:21:47 +0000 UTC }] May 2 11:22:28.951: INFO: May 2 11:22:28.951: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-745sk May 2 11:22:29.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2
11:22:30.085: INFO: rc: 1 May 2 11:22:30.085: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00092e360 exit status 1 true [0xc00176ea70 0xc00176ea88 0xc00176eaa0] [0xc00176ea70 0xc00176ea88 0xc00176eaa0] [0xc00176ea80 0xc00176ea98] [0x935700 0x935700] 0xc001454fc0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 May 2 11:22:40.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 11:22:40.178: INFO: rc: 1 May 2 11:22:40.178: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0024899b0 exit status 1 true [0xc000d350a0 0xc000d350b8 0xc000d350d0] [0xc000d350a0 0xc000d350b8 0xc000d350d0] [0xc000d350b0 0xc000d350c8] [0x935700 0x935700] 0xc0019bd9e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 2 11:22:50.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 11:22:50.269: INFO: rc: 1 May 2 11:22:50.269: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] 
Error from server (NotFound): pods "ss-0" not found [] 0xc001fd4720 exit status 1 true [0xc00160e750 0xc00160e768 0xc00160e780] [0xc00160e750 0xc00160e768 0xc00160e780] [0xc00160e760 0xc00160e778] [0x935700 0x935700] 0xc0016db800 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 2 11:23:00.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 11:23:00.350: INFO: rc: 1 May 2 11:23:00.350: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001fd4900 exit status 1 true [0xc00160e788 0xc00160e7a0 0xc00160e7b8] [0xc00160e788 0xc00160e7a0 0xc00160e7b8] [0xc00160e798 0xc00160e7b0] [0x935700 0x935700] 0xc001a9c300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 2 11:23:10.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 11:23:10.437: INFO: rc: 1 May 2 11:23:10.438: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001fd4a50 exit status 1 true [0xc00160e7c0 0xc00160e7d8 0xc00160e7f0] [0xc00160e7c0 0xc00160e7d8 0xc00160e7f0] [0xc00160e7d0 0xc00160e7e8] [0x935700 0x935700] 0xc001a9c780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit 
status 1
May 2 11:23:20.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 2 11:23:20.586: INFO: rc: 1
May 2 11:23:20.586: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001982120 exit status 1 true [0xc0004f8bf0 0xc0004f8c30 0xc0004f8c90] [0xc0004f8bf0 0xc0004f8c30 0xc0004f8c90] [0xc0004f8c20 0xc0004f8c70] [0x935700 0x935700] 0xc0016da5a0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
[... the identical RunHostCmd was retried every 10s from 11:23:30 through 11:27:22 (24 further attempts), each returning rc: 1 with the same "Error from server (NotFound): pods "ss-0" not found" on stderr ...]
May 2 11:27:32.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-745sk ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 2 11:27:32.971: INFO: rc: 1
May 2 11:27:32.971: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
May 2 11:27:32.971: INFO: Scaling statefulset ss to 0
May 2 11:27:32.978: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 2 11:27:32.979: INFO: Deleting all statefulset in ns e2e-tests-statefulset-745sk
May 2 11:27:32.981: INFO: Scaling statefulset ss to 0
May 2 11:27:32.985: INFO: Waiting for statefulset status.replicas updated to 0
May 2 11:27:32.987: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:27:33.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-745sk" for this suite. May 2 11:27:39.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:27:39.316: INFO: namespace: e2e-tests-statefulset-745sk, resource: bindings, ignored listing per whitelist May 2 11:27:39.341: INFO: namespace e2e-tests-statefulset-745sk deletion completed in 6.194029536s • [SLOW TEST:371.883 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:27:39.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps STEP: 
creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 2 11:27:39.676: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9ffk7,SelfLink:/api/v1/namespaces/e2e-tests-watch-9ffk7/configmaps/e2e-watch-test-watch-closed,UID:ee42a6e4-8c67-11ea-99e8-0242ac110002,ResourceVersion:8339127,Generation:0,CreationTimestamp:2020-05-02 11:27:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 2 11:27:39.676: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9ffk7,SelfLink:/api/v1/namespaces/e2e-tests-watch-9ffk7/configmaps/e2e-watch-test-watch-closed,UID:ee42a6e4-8c67-11ea-99e8-0242ac110002,ResourceVersion:8339129,Generation:0,CreationTimestamp:2020-05-02 11:27:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 2 11:27:39.810: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9ffk7,SelfLink:/api/v1/namespaces/e2e-tests-watch-9ffk7/configmaps/e2e-watch-test-watch-closed,UID:ee42a6e4-8c67-11ea-99e8-0242ac110002,ResourceVersion:8339130,Generation:0,CreationTimestamp:2020-05-02 11:27:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 2 11:27:39.810: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-9ffk7,SelfLink:/api/v1/namespaces/e2e-tests-watch-9ffk7/configmaps/e2e-watch-test-watch-closed,UID:ee42a6e4-8c67-11ea-99e8-0242ac110002,ResourceVersion:8339131,Generation:0,CreationTimestamp:2020-05-02 11:27:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:27:39.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-9ffk7" for this suite. 
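[Editor's note] The Watchers test above verifies that a client which closed its watch after ResourceVersion 8339129 can open a new watch from that version and observe exactly the MODIFIED (mutation: 2) and DELETED events it missed. That resume-from-last-resourceVersion pattern can be simulated outside a cluster; the sketch below uses a hypothetical in-memory event log, not the real Kubernetes client API:

```python
# Minimal simulation of watch restart from the last observed resourceVersion.
# Event, EventLog, and their methods are illustrative names, not Kubernetes APIs.
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str              # ADDED / MODIFIED / DELETED
    resource_version: int  # monotonically increasing, assigned by the "server"

@dataclass
class EventLog:
    events: list = field(default_factory=list)
    _rv: int = 0

    def record(self, kind):
        self._rv += 1
        self.events.append(Event(kind, self._rv))

    def watch(self, after_rv=0):
        # Replay only events strictly newer than the caller's last observed version.
        return [e for e in self.events if e.resource_version > after_rv]

log = EventLog()
log.record("ADDED")        # create the configmap
log.record("MODIFIED")     # mutation: 1
first = log.watch()        # first watch sees both notifications, then closes
last_rv = first[-1].resource_version

log.record("MODIFIED")     # mutation: 2, while the watch is closed
log.record("DELETED")      # configmap deleted
resumed = log.watch(after_rv=last_rv)  # restarted watch sees only the missed events
```

As in the test, the restarted watch observes every change made since the first watch closed, and nothing it had already seen.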
May 2 11:27:45.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:27:45.890: INFO: namespace: e2e-tests-watch-9ffk7, resource: bindings, ignored listing per whitelist May 2 11:27:45.943: INFO: namespace e2e-tests-watch-9ffk7 deletion completed in 6.093662716s • [SLOW TEST:6.603 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:27:45.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-f23a83c2-8c67-11ea-8045-0242ac110017 STEP: Creating a pod to test consume configMaps May 2 11:27:46.240: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f23d8ae0-8c67-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-qxkv2" to be "success or failure" May 2 11:27:46.258: INFO: Pod 
"pod-projected-configmaps-f23d8ae0-8c67-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 18.44108ms May 2 11:27:48.353: INFO: Pod "pod-projected-configmaps-f23d8ae0-8c67-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113814877s May 2 11:27:50.358: INFO: Pod "pod-projected-configmaps-f23d8ae0-8c67-11ea-8045-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.118132645s May 2 11:27:52.362: INFO: Pod "pod-projected-configmaps-f23d8ae0-8c67-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.122154261s STEP: Saw pod success May 2 11:27:52.362: INFO: Pod "pod-projected-configmaps-f23d8ae0-8c67-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:27:52.365: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-f23d8ae0-8c67-11ea-8045-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 2 11:27:52.417: INFO: Waiting for pod pod-projected-configmaps-f23d8ae0-8c67-11ea-8045-0242ac110017 to disappear May 2 11:27:52.424: INFO: Pod pod-projected-configmaps-f23d8ae0-8c67-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:27:52.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-qxkv2" for this suite. 
May 2 11:27:58.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:27:58.450: INFO: namespace: e2e-tests-projected-qxkv2, resource: bindings, ignored listing per whitelist May 2 11:27:58.518: INFO: namespace e2e-tests-projected-qxkv2 deletion completed in 6.091143288s • [SLOW TEST:12.574 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:27:58.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-pc7sw [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 2 
11:27:58.700: INFO: Found 0 stateful pods, waiting for 3 May 2 11:28:08.705: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 2 11:28:08.705: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 2 11:28:08.705: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 2 11:28:08.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pc7sw ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 11:28:09.004: INFO: stderr: "I0502 11:28:08.858284 2448 log.go:172] (0xc00015c840) (0xc00075e640) Create stream\nI0502 11:28:08.858353 2448 log.go:172] (0xc00015c840) (0xc00075e640) Stream added, broadcasting: 1\nI0502 11:28:08.871637 2448 log.go:172] (0xc00015c840) Reply frame received for 1\nI0502 11:28:08.871692 2448 log.go:172] (0xc00015c840) (0xc000606dc0) Create stream\nI0502 11:28:08.871704 2448 log.go:172] (0xc00015c840) (0xc000606dc0) Stream added, broadcasting: 3\nI0502 11:28:08.872549 2448 log.go:172] (0xc00015c840) Reply frame received for 3\nI0502 11:28:08.872577 2448 log.go:172] (0xc00015c840) (0xc000556000) Create stream\nI0502 11:28:08.872587 2448 log.go:172] (0xc00015c840) (0xc000556000) Stream added, broadcasting: 5\nI0502 11:28:08.873413 2448 log.go:172] (0xc00015c840) Reply frame received for 5\nI0502 11:28:08.996159 2448 log.go:172] (0xc00015c840) Data frame received for 3\nI0502 11:28:08.996230 2448 log.go:172] (0xc000606dc0) (3) Data frame handling\nI0502 11:28:08.996259 2448 log.go:172] (0xc000606dc0) (3) Data frame sent\nI0502 11:28:08.996280 2448 log.go:172] (0xc00015c840) Data frame received for 3\nI0502 11:28:08.996305 2448 log.go:172] (0xc000606dc0) (3) Data frame handling\nI0502 11:28:08.996333 2448 log.go:172] (0xc00015c840) Data frame received for 5\nI0502 11:28:08.996375 2448 log.go:172] (0xc000556000) (5) Data frame handling\nI0502 
11:28:08.999709 2448 log.go:172] (0xc00015c840) Data frame received for 1\nI0502 11:28:08.999734 2448 log.go:172] (0xc00075e640) (1) Data frame handling\nI0502 11:28:08.999749 2448 log.go:172] (0xc00075e640) (1) Data frame sent\nI0502 11:28:08.999766 2448 log.go:172] (0xc00015c840) (0xc00075e640) Stream removed, broadcasting: 1\nI0502 11:28:08.999797 2448 log.go:172] (0xc00015c840) Go away received\nI0502 11:28:09.000070 2448 log.go:172] (0xc00015c840) (0xc00075e640) Stream removed, broadcasting: 1\nI0502 11:28:09.000104 2448 log.go:172] (0xc00015c840) (0xc000606dc0) Stream removed, broadcasting: 3\nI0502 11:28:09.000122 2448 log.go:172] (0xc00015c840) (0xc000556000) Stream removed, broadcasting: 5\n" May 2 11:28:09.004: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 11:28:09.004: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 2 11:28:19.038: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 2 11:28:29.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pc7sw ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 11:28:29.284: INFO: stderr: "I0502 11:28:29.186343 2472 log.go:172] (0xc000138630) (0xc00072e640) Create stream\nI0502 11:28:29.186407 2472 log.go:172] (0xc000138630) (0xc00072e640) Stream added, broadcasting: 1\nI0502 11:28:29.188866 2472 log.go:172] (0xc000138630) Reply frame received for 1\nI0502 11:28:29.188914 2472 log.go:172] (0xc000138630) (0xc000418c80) Create stream\nI0502 11:28:29.188927 2472 log.go:172] (0xc000138630) (0xc000418c80) Stream added, broadcasting: 3\nI0502 11:28:29.190143 2472 log.go:172] (0xc000138630) Reply frame received for 3\nI0502 
11:28:29.190186 2472 log.go:172] (0xc000138630) (0xc00072e6e0) Create stream\nI0502 11:28:29.190200 2472 log.go:172] (0xc000138630) (0xc00072e6e0) Stream added, broadcasting: 5\nI0502 11:28:29.191218 2472 log.go:172] (0xc000138630) Reply frame received for 5\nI0502 11:28:29.277878 2472 log.go:172] (0xc000138630) Data frame received for 3\nI0502 11:28:29.277906 2472 log.go:172] (0xc000418c80) (3) Data frame handling\nI0502 11:28:29.277914 2472 log.go:172] (0xc000418c80) (3) Data frame sent\nI0502 11:28:29.277920 2472 log.go:172] (0xc000138630) Data frame received for 3\nI0502 11:28:29.277927 2472 log.go:172] (0xc000418c80) (3) Data frame handling\nI0502 11:28:29.277951 2472 log.go:172] (0xc000138630) Data frame received for 5\nI0502 11:28:29.277957 2472 log.go:172] (0xc00072e6e0) (5) Data frame handling\nI0502 11:28:29.280300 2472 log.go:172] (0xc000138630) Data frame received for 1\nI0502 11:28:29.280321 2472 log.go:172] (0xc00072e640) (1) Data frame handling\nI0502 11:28:29.280348 2472 log.go:172] (0xc00072e640) (1) Data frame sent\nI0502 11:28:29.280476 2472 log.go:172] (0xc000138630) (0xc00072e640) Stream removed, broadcasting: 1\nI0502 11:28:29.280647 2472 log.go:172] (0xc000138630) Go away received\nI0502 11:28:29.280730 2472 log.go:172] (0xc000138630) (0xc00072e640) Stream removed, broadcasting: 1\nI0502 11:28:29.280758 2472 log.go:172] (0xc000138630) (0xc000418c80) Stream removed, broadcasting: 3\nI0502 11:28:29.280781 2472 log.go:172] (0xc000138630) (0xc00072e6e0) Stream removed, broadcasting: 5\n" May 2 11:28:29.285: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 11:28:29.285: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 11:28:39.303: INFO: Waiting for StatefulSet e2e-tests-statefulset-pc7sw/ss2 to complete update May 2 11:28:39.303: INFO: Waiting for Pod e2e-tests-statefulset-pc7sw/ss2-0 to have revision ss2-6c5cd755cd update 
revision ss2-7c9b54fd4c May 2 11:28:39.303: INFO: Waiting for Pod e2e-tests-statefulset-pc7sw/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 2 11:28:39.303: INFO: Waiting for Pod e2e-tests-statefulset-pc7sw/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 2 11:28:49.310: INFO: Waiting for StatefulSet e2e-tests-statefulset-pc7sw/ss2 to complete update May 2 11:28:49.310: INFO: Waiting for Pod e2e-tests-statefulset-pc7sw/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 2 11:28:59.311: INFO: Waiting for StatefulSet e2e-tests-statefulset-pc7sw/ss2 to complete update May 2 11:28:59.311: INFO: Waiting for Pod e2e-tests-statefulset-pc7sw/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision May 2 11:29:09.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pc7sw ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 11:29:09.543: INFO: stderr: "I0502 11:29:09.429057 2494 log.go:172] (0xc0008402c0) (0xc000766640) Create stream\nI0502 11:29:09.429336 2494 log.go:172] (0xc0008402c0) (0xc000766640) Stream added, broadcasting: 1\nI0502 11:29:09.431352 2494 log.go:172] (0xc0008402c0) Reply frame received for 1\nI0502 11:29:09.431411 2494 log.go:172] (0xc0008402c0) (0xc0005ccc80) Create stream\nI0502 11:29:09.431435 2494 log.go:172] (0xc0008402c0) (0xc0005ccc80) Stream added, broadcasting: 3\nI0502 11:29:09.432260 2494 log.go:172] (0xc0008402c0) Reply frame received for 3\nI0502 11:29:09.432312 2494 log.go:172] (0xc0008402c0) (0xc00057a000) Create stream\nI0502 11:29:09.432327 2494 log.go:172] (0xc0008402c0) (0xc00057a000) Stream added, broadcasting: 5\nI0502 11:29:09.433096 2494 log.go:172] (0xc0008402c0) Reply frame received for 5\nI0502 11:29:09.536264 2494 log.go:172] (0xc0008402c0) Data frame received for 3\nI0502 11:29:09.536310 2494 log.go:172] 
(0xc0005ccc80) (3) Data frame handling\nI0502 11:29:09.536341 2494 log.go:172] (0xc0005ccc80) (3) Data frame sent\nI0502 11:29:09.536360 2494 log.go:172] (0xc0008402c0) Data frame received for 3\nI0502 11:29:09.536378 2494 log.go:172] (0xc0005ccc80) (3) Data frame handling\nI0502 11:29:09.536513 2494 log.go:172] (0xc0008402c0) Data frame received for 5\nI0502 11:29:09.536549 2494 log.go:172] (0xc00057a000) (5) Data frame handling\nI0502 11:29:09.538565 2494 log.go:172] (0xc0008402c0) Data frame received for 1\nI0502 11:29:09.538580 2494 log.go:172] (0xc000766640) (1) Data frame handling\nI0502 11:29:09.538589 2494 log.go:172] (0xc000766640) (1) Data frame sent\nI0502 11:29:09.538813 2494 log.go:172] (0xc0008402c0) (0xc000766640) Stream removed, broadcasting: 1\nI0502 11:29:09.539016 2494 log.go:172] (0xc0008402c0) (0xc000766640) Stream removed, broadcasting: 1\nI0502 11:29:09.539038 2494 log.go:172] (0xc0008402c0) (0xc0005ccc80) Stream removed, broadcasting: 3\nI0502 11:29:09.539058 2494 log.go:172] (0xc0008402c0) Go away received\nI0502 11:29:09.539107 2494 log.go:172] (0xc0008402c0) (0xc00057a000) Stream removed, broadcasting: 5\n" May 2 11:29:09.543: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 11:29:09.543: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 11:29:19.575: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 2 11:29:29.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pc7sw ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 11:29:29.816: INFO: stderr: "I0502 11:29:29.719948 2517 log.go:172] (0xc0001386e0) (0xc000641360) Create stream\nI0502 11:29:29.720007 2517 log.go:172] (0xc0001386e0) (0xc000641360) Stream added, broadcasting: 1\nI0502 11:29:29.722120 2517 log.go:172] (0xc0001386e0) Reply frame 
received for 1\nI0502 11:29:29.722175 2517 log.go:172] (0xc0001386e0) (0xc0003e8000) Create stream\nI0502 11:29:29.722193 2517 log.go:172] (0xc0001386e0) (0xc0003e8000) Stream added, broadcasting: 3\nI0502 11:29:29.722878 2517 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0502 11:29:29.722917 2517 log.go:172] (0xc0001386e0) (0xc000596000) Create stream\nI0502 11:29:29.722930 2517 log.go:172] (0xc0001386e0) (0xc000596000) Stream added, broadcasting: 5\nI0502 11:29:29.723484 2517 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0502 11:29:29.810097 2517 log.go:172] (0xc0001386e0) Data frame received for 5\nI0502 11:29:29.810127 2517 log.go:172] (0xc000596000) (5) Data frame handling\nI0502 11:29:29.810188 2517 log.go:172] (0xc0001386e0) Data frame received for 3\nI0502 11:29:29.810221 2517 log.go:172] (0xc0003e8000) (3) Data frame handling\nI0502 11:29:29.810235 2517 log.go:172] (0xc0003e8000) (3) Data frame sent\nI0502 11:29:29.810247 2517 log.go:172] (0xc0001386e0) Data frame received for 3\nI0502 11:29:29.810257 2517 log.go:172] (0xc0003e8000) (3) Data frame handling\nI0502 11:29:29.811823 2517 log.go:172] (0xc0001386e0) Data frame received for 1\nI0502 11:29:29.811856 2517 log.go:172] (0xc000641360) (1) Data frame handling\nI0502 11:29:29.811883 2517 log.go:172] (0xc000641360) (1) Data frame sent\nI0502 11:29:29.811899 2517 log.go:172] (0xc0001386e0) (0xc000641360) Stream removed, broadcasting: 1\nI0502 11:29:29.811915 2517 log.go:172] (0xc0001386e0) Go away received\nI0502 11:29:29.812134 2517 log.go:172] (0xc0001386e0) (0xc000641360) Stream removed, broadcasting: 1\nI0502 11:29:29.812168 2517 log.go:172] (0xc0001386e0) (0xc0003e8000) Stream removed, broadcasting: 3\nI0502 11:29:29.812191 2517 log.go:172] (0xc0001386e0) (0xc000596000) Stream removed, broadcasting: 5\n" May 2 11:29:29.817: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 11:29:29.817: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || 
true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 2 11:29:39.838: INFO: Waiting for StatefulSet e2e-tests-statefulset-pc7sw/ss2 to complete update
May 2 11:29:39.838: INFO: Waiting for Pod e2e-tests-statefulset-pc7sw/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
May 2 11:29:39.838: INFO: Waiting for Pod e2e-tests-statefulset-pc7sw/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
May 2 11:29:39.838: INFO: Waiting for Pod e2e-tests-statefulset-pc7sw/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
May 2 11:29:49.846: INFO: Waiting for StatefulSet e2e-tests-statefulset-pc7sw/ss2 to complete update
May 2 11:29:49.846: INFO: Waiting for Pod e2e-tests-statefulset-pc7sw/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
May 2 11:29:49.846: INFO: Waiting for Pod e2e-tests-statefulset-pc7sw/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
May 2 11:29:59.847: INFO: Waiting for StatefulSet e2e-tests-statefulset-pc7sw/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 2 11:30:09.847: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pc7sw
May 2 11:30:09.850: INFO: Scaling statefulset ss2 to 0
May 2 11:30:29.879: INFO: Waiting for statefulset status.replicas updated to 0
May 2 11:30:29.882: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:30:29.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-pc7sw" for this suite.
May 2 11:30:38.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:30:38.050: INFO: namespace: e2e-tests-statefulset-pc7sw, resource: bindings, ignored listing per whitelist
May 2 11:30:38.130: INFO: namespace e2e-tests-statefulset-pc7sw deletion completed in 8.204201623s
• [SLOW TEST:159.611 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:30:38.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
May 2 11:30:38.271: INFO: Waiting up to 5m0s for pod "var-expansion-58cc899d-8c68-11ea-8045-0242ac110017" in namespace "e2e-tests-var-expansion-z5fxd" to be "success or failure"
May 2 11:30:38.274: INFO: Pod "var-expansion-58cc899d-8c68-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.626865ms
May 2 11:30:40.300: INFO: Pod "var-expansion-58cc899d-8c68-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029225755s
May 2 11:30:42.400: INFO: Pod "var-expansion-58cc899d-8c68-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129396816s
STEP: Saw pod success
May 2 11:30:42.401: INFO: Pod "var-expansion-58cc899d-8c68-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:30:42.616: INFO: Trying to get logs from node hunter-worker pod var-expansion-58cc899d-8c68-11ea-8045-0242ac110017 container dapi-container:
STEP: delete the pod
May 2 11:30:42.676: INFO: Waiting for pod var-expansion-58cc899d-8c68-11ea-8045-0242ac110017 to disappear
May 2 11:30:42.792: INFO: Pod var-expansion-58cc899d-8c68-11ea-8045-0242ac110017 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:30:42.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-z5fxd" for this suite.
May 2 11:30:48.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:30:48.880: INFO: namespace: e2e-tests-var-expansion-z5fxd, resource: bindings, ignored listing per whitelist
May 2 11:30:48.905: INFO: namespace e2e-tests-var-expansion-z5fxd deletion completed in 6.109293595s
• [SLOW TEST:10.775 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Secrets
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:30:48.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-5f3836a7-8c68-11ea-8045-0242ac110017
STEP: Creating secret with name s-test-opt-upd-5f383702-8c68-11ea-8045-0242ac110017
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5f3836a7-8c68-11ea-8045-0242ac110017
STEP: Updating secret s-test-opt-upd-5f383702-8c68-11ea-8045-0242ac110017
STEP: Creating secret with name s-test-opt-create-5f383720-8c68-11ea-8045-0242ac110017
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:32:21.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bdszf" for this suite.
May 2 11:32:45.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:32:45.800: INFO: namespace: e2e-tests-secrets-bdszf, resource: bindings, ignored listing per whitelist
May 2 11:32:45.860: INFO: namespace e2e-tests-secrets-bdszf deletion completed in 24.108103795s
• [SLOW TEST:116.954 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Docker Containers
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:32:45.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
May 2 11:32:46.049: INFO: Waiting up to 5m0s for pod "client-containers-a4f407a2-8c68-11ea-8045-0242ac110017" in namespace "e2e-tests-containers-zlfwg" to be "success or failure"
May 2 11:32:46.062: INFO: Pod "client-containers-a4f407a2-8c68-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 12.525107ms
May 2 11:32:48.223: INFO: Pod "client-containers-a4f407a2-8c68-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173383423s
May 2 11:32:50.227: INFO: Pod "client-containers-a4f407a2-8c68-11ea-8045-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.177959173s
May 2 11:32:52.232: INFO: Pod "client-containers-a4f407a2-8c68-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.18245885s
STEP: Saw pod success
May 2 11:32:52.232: INFO: Pod "client-containers-a4f407a2-8c68-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:32:52.235: INFO: Trying to get logs from node hunter-worker pod client-containers-a4f407a2-8c68-11ea-8045-0242ac110017 container test-container:
STEP: delete the pod
May 2 11:32:52.266: INFO: Waiting for pod client-containers-a4f407a2-8c68-11ea-8045-0242ac110017 to disappear
May 2 11:32:52.282: INFO: Pod client-containers-a4f407a2-8c68-11ea-8045-0242ac110017 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:32:52.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-zlfwg" for this suite.
May 2 11:32:58.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:32:58.354: INFO: namespace: e2e-tests-containers-zlfwg, resource: bindings, ignored listing per whitelist
May 2 11:32:58.403: INFO: namespace e2e-tests-containers-zlfwg deletion completed in 6.118854499s
• [SLOW TEST:12.543 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:32:58.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
May 2 11:33:02.547: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-ac64f6b6-8c68-11ea-8045-0242ac110017", GenerateName:"", Namespace:"e2e-tests-pods-6bx55",
SelfLink:"/api/v1/namespaces/e2e-tests-pods-6bx55/pods/pod-submit-remove-ac64f6b6-8c68-11ea-8045-0242ac110017", UID:"ac68a49e-8c68-11ea-99e8-0242ac110002", ResourceVersion:"8340235", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724015978, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"500700303"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tmwkk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00115cd40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tmwkk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0018ded48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0021d8300), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0018ded90)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0018dedb0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0018dedb8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), 
EnableServiceLinks:(*bool)(0xc0018dedbc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724015978, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724015981, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724015981, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724015978, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.4", PodIP:"10.244.2.200", StartTime:(*v1.Time)(0xc00153bea0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00153bec0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", 
ContainerID:"containerd://2e7447ed2ad3e3abbcc6a33546baddcf3979d9c3faac0afd1cae2877fe9179a5"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:33:11.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6bx55" for this suite.
May 2 11:33:17.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:33:17.782: INFO: namespace: e2e-tests-pods-6bx55, resource: bindings, ignored listing per whitelist
May 2 11:33:17.872: INFO: namespace e2e-tests-pods-6bx55 deletion completed in 6.137143493s
• [SLOW TEST:19.468 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:33:17.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
May 2 11:33:18.009: INFO: Waiting up to 5m0s for pod "downward-api-b801c66e-8c68-11ea-8045-0242ac110017" in namespace "e2e-tests-downward-api-wlqvv" to be "success or failure"
May 2 11:33:18.026: INFO: Pod "downward-api-b801c66e-8c68-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.707344ms
May 2 11:33:20.031: INFO: Pod "downward-api-b801c66e-8c68-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021148009s
May 2 11:33:22.036: INFO: Pod "downward-api-b801c66e-8c68-11ea-8045-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.02619631s
May 2 11:33:24.040: INFO: Pod "downward-api-b801c66e-8c68-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030616136s
STEP: Saw pod success
May 2 11:33:24.040: INFO: Pod "downward-api-b801c66e-8c68-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:33:24.043: INFO: Trying to get logs from node hunter-worker pod downward-api-b801c66e-8c68-11ea-8045-0242ac110017 container dapi-container:
STEP: delete the pod
May 2 11:33:24.086: INFO: Waiting for pod downward-api-b801c66e-8c68-11ea-8045-0242ac110017 to disappear
May 2 11:33:24.098: INFO: Pod downward-api-b801c66e-8c68-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:33:24.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wlqvv" for this suite.
May 2 11:33:30.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:33:30.155: INFO: namespace: e2e-tests-downward-api-wlqvv, resource: bindings, ignored listing per whitelist
May 2 11:33:30.196: INFO: namespace e2e-tests-downward-api-wlqvv deletion completed in 6.094580043s
• [SLOW TEST:12.324 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Watchers
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:33:30.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May 2 11:33:31.044: INFO: Got : ADDED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-z7bjk,SelfLink:/api/v1/namespaces/e2e-tests-watch-z7bjk/configmaps/e2e-watch-test-configmap-a,UID:bfc65fb3-8c68-11ea-99e8-0242ac110002,ResourceVersion:8340336,Generation:0,CreationTimestamp:2020-05-02 11:33:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 2 11:33:31.044: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-z7bjk,SelfLink:/api/v1/namespaces/e2e-tests-watch-z7bjk/configmaps/e2e-watch-test-configmap-a,UID:bfc65fb3-8c68-11ea-99e8-0242ac110002,ResourceVersion:8340336,Generation:0,CreationTimestamp:2020-05-02 11:33:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 2 11:33:41.054: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-z7bjk,SelfLink:/api/v1/namespaces/e2e-tests-watch-z7bjk/configmaps/e2e-watch-test-configmap-a,UID:bfc65fb3-8c68-11ea-99e8-0242ac110002,ResourceVersion:8340355,Generation:0,CreationTimestamp:2020-05-02 11:33:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 2 11:33:41.054: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-z7bjk,SelfLink:/api/v1/namespaces/e2e-tests-watch-z7bjk/configmaps/e2e-watch-test-configmap-a,UID:bfc65fb3-8c68-11ea-99e8-0242ac110002,ResourceVersion:8340355,Generation:0,CreationTimestamp:2020-05-02 11:33:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 2 11:33:51.063: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-z7bjk,SelfLink:/api/v1/namespaces/e2e-tests-watch-z7bjk/configmaps/e2e-watch-test-configmap-a,UID:bfc65fb3-8c68-11ea-99e8-0242ac110002,ResourceVersion:8340375,Generation:0,CreationTimestamp:2020-05-02 11:33:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 2 11:33:51.063: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-z7bjk,SelfLink:/api/v1/namespaces/e2e-tests-watch-z7bjk/configmaps/e2e-watch-test-configmap-a,UID:bfc65fb3-8c68-11ea-99e8-0242ac110002,ResourceVersion:8340375,Generation:0,CreationTimestamp:2020-05-02 11:33:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 2 11:34:01.070: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-z7bjk,SelfLink:/api/v1/namespaces/e2e-tests-watch-z7bjk/configmaps/e2e-watch-test-configmap-a,UID:bfc65fb3-8c68-11ea-99e8-0242ac110002,ResourceVersion:8340395,Generation:0,CreationTimestamp:2020-05-02 11:33:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 2 11:34:01.070: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-z7bjk,SelfLink:/api/v1/namespaces/e2e-tests-watch-z7bjk/configmaps/e2e-watch-test-configmap-a,UID:bfc65fb3-8c68-11ea-99e8-0242ac110002,ResourceVersion:8340395,Generation:0,CreationTimestamp:2020-05-02 11:33:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 2 11:34:11.078: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-z7bjk,SelfLink:/api/v1/namespaces/e2e-tests-watch-z7bjk/configmaps/e2e-watch-test-configmap-b,UID:d7a6808a-8c68-11ea-99e8-0242ac110002,ResourceVersion:8340415,Generation:0,CreationTimestamp:2020-05-02 11:34:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 2 11:34:11.078: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-z7bjk,SelfLink:/api/v1/namespaces/e2e-tests-watch-z7bjk/configmaps/e2e-watch-test-configmap-b,UID:d7a6808a-8c68-11ea-99e8-0242ac110002,ResourceVersion:8340415,Generation:0,CreationTimestamp:2020-05-02 11:34:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 2 11:34:21.086: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-z7bjk,SelfLink:/api/v1/namespaces/e2e-tests-watch-z7bjk/configmaps/e2e-watch-test-configmap-b,UID:d7a6808a-8c68-11ea-99e8-0242ac110002,ResourceVersion:8340435,Generation:0,CreationTimestamp:2020-05-02 11:34:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 2 11:34:21.086: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-z7bjk,SelfLink:/api/v1/namespaces/e2e-tests-watch-z7bjk/configmaps/e2e-watch-test-configmap-b,UID:d7a6808a-8c68-11ea-99e8-0242ac110002,ResourceVersion:8340435,Generation:0,CreationTimestamp:2020-05-02 11:34:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:34:31.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-z7bjk" for this suite. 
May 2 11:34:37.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:34:37.134: INFO: namespace: e2e-tests-watch-z7bjk, resource: bindings, ignored listing per whitelist
May 2 11:34:37.194: INFO: namespace e2e-tests-watch-z7bjk deletion completed in 6.103754645s
• [SLOW TEST:66.997 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:34:37.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-bqxfx
I0502 11:34:37.424196 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-bqxfx, replica count: 1
I0502 11:34:38.474585 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0502 11:34:39.474801 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0
unknown, 0 runningButNotReady I0502 11:34:40.474991 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0502 11:34:41.475154 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0502 11:34:42.475408 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 2 11:34:42.622: INFO: Created: latency-svc-2vf9c May 2 11:34:42.649: INFO: Got endpoints: latency-svc-2vf9c [73.872304ms] May 2 11:34:42.711: INFO: Created: latency-svc-gqpq5 May 2 11:34:42.730: INFO: Got endpoints: latency-svc-gqpq5 [80.442048ms] May 2 11:34:42.782: INFO: Created: latency-svc-6p9z7 May 2 11:34:42.846: INFO: Got endpoints: latency-svc-6p9z7 [196.872202ms] May 2 11:34:42.873: INFO: Created: latency-svc-52pfh May 2 11:34:42.886: INFO: Got endpoints: latency-svc-52pfh [236.879162ms] May 2 11:34:42.938: INFO: Created: latency-svc-kk8b2 May 2 11:34:42.984: INFO: Got endpoints: latency-svc-kk8b2 [334.385304ms] May 2 11:34:43.010: INFO: Created: latency-svc-9526c May 2 11:34:43.042: INFO: Got endpoints: latency-svc-9526c [392.435285ms] May 2 11:34:43.070: INFO: Created: latency-svc-2xgdn May 2 11:34:43.084: INFO: Got endpoints: latency-svc-2xgdn [434.100017ms] May 2 11:34:43.134: INFO: Created: latency-svc-5x2k7 May 2 11:34:43.138: INFO: Got endpoints: latency-svc-5x2k7 [488.822767ms] May 2 11:34:43.180: INFO: Created: latency-svc-cjt56 May 2 11:34:43.198: INFO: Got endpoints: latency-svc-cjt56 [548.767586ms] May 2 11:34:43.221: INFO: Created: latency-svc-w9p5g May 2 11:34:43.230: INFO: Got endpoints: latency-svc-w9p5g [581.029088ms] May 2 11:34:43.302: INFO: Created: latency-svc-gl4tr May 2 11:34:43.308: INFO: Got endpoints: latency-svc-gl4tr [658.698599ms] May 2 11:34:43.348: INFO: Created: latency-svc-6pqwg 
May 2 11:34:43.370: INFO: Got endpoints: latency-svc-6pqwg [720.145126ms] May 2 11:34:43.502: INFO: Created: latency-svc-v2zcr May 2 11:34:43.543: INFO: Got endpoints: latency-svc-v2zcr [893.864234ms] May 2 11:34:43.570: INFO: Created: latency-svc-4sbtc May 2 11:34:43.585: INFO: Got endpoints: latency-svc-4sbtc [935.964451ms] May 2 11:34:43.655: INFO: Created: latency-svc-pkzmn May 2 11:34:43.658: INFO: Got endpoints: latency-svc-pkzmn [1.008101827s] May 2 11:34:43.694: INFO: Created: latency-svc-2s98l May 2 11:34:43.711: INFO: Got endpoints: latency-svc-2s98l [1.061117245s] May 2 11:34:43.736: INFO: Created: latency-svc-458r8 May 2 11:34:43.747: INFO: Got endpoints: latency-svc-458r8 [1.017380125s] May 2 11:34:43.797: INFO: Created: latency-svc-pz7hh May 2 11:34:43.813: INFO: Got endpoints: latency-svc-pz7hh [967.342243ms] May 2 11:34:43.836: INFO: Created: latency-svc-p54kb May 2 11:34:43.851: INFO: Got endpoints: latency-svc-p54kb [964.401021ms] May 2 11:34:43.973: INFO: Created: latency-svc-sqds7 May 2 11:34:43.977: INFO: Got endpoints: latency-svc-sqds7 [993.14559ms] May 2 11:34:44.211: INFO: Created: latency-svc-5kdgf May 2 11:34:44.410: INFO: Got endpoints: latency-svc-5kdgf [1.368235073s] May 2 11:34:44.579: INFO: Created: latency-svc-jljbc May 2 11:34:44.590: INFO: Got endpoints: latency-svc-jljbc [1.505982975s] May 2 11:34:44.822: INFO: Created: latency-svc-6dm26 May 2 11:34:44.877: INFO: Got endpoints: latency-svc-6dm26 [1.738806827s] May 2 11:34:45.058: INFO: Created: latency-svc-tmz9m May 2 11:34:45.062: INFO: Got endpoints: latency-svc-tmz9m [1.863447067s] May 2 11:34:45.207: INFO: Created: latency-svc-vzj5f May 2 11:34:45.238: INFO: Got endpoints: latency-svc-vzj5f [2.007563052s] May 2 11:34:45.239: INFO: Created: latency-svc-gcc6w May 2 11:34:45.262: INFO: Got endpoints: latency-svc-gcc6w [1.953948234s] May 2 11:34:45.303: INFO: Created: latency-svc-r85b9 May 2 11:34:45.352: INFO: Got endpoints: latency-svc-r85b9 [1.982621146s] May 2 11:34:45.401: 
INFO: Created: latency-svc-mh8l9 May 2 11:34:45.412: INFO: Got endpoints: latency-svc-mh8l9 [1.868683116s] May 2 11:34:45.439: INFO: Created: latency-svc-cqqm8 May 2 11:34:45.511: INFO: Got endpoints: latency-svc-cqqm8 [1.925942331s] May 2 11:34:45.539: INFO: Created: latency-svc-6tqt9 May 2 11:34:45.551: INFO: Got endpoints: latency-svc-6tqt9 [1.893486244s] May 2 11:34:45.586: INFO: Created: latency-svc-gxjjb May 2 11:34:45.599: INFO: Got endpoints: latency-svc-gxjjb [1.888525285s] May 2 11:34:45.661: INFO: Created: latency-svc-tljbv May 2 11:34:45.684: INFO: Got endpoints: latency-svc-tljbv [1.936427707s] May 2 11:34:45.742: INFO: Created: latency-svc-94xhv May 2 11:34:45.756: INFO: Got endpoints: latency-svc-94xhv [1.942570261s] May 2 11:34:45.816: INFO: Created: latency-svc-bcjxz May 2 11:34:45.841: INFO: Got endpoints: latency-svc-bcjxz [1.990594372s] May 2 11:34:45.896: INFO: Created: latency-svc-bhwl6 May 2 11:34:45.906: INFO: Got endpoints: latency-svc-bhwl6 [1.929340175s] May 2 11:34:45.970: INFO: Created: latency-svc-bpgdp May 2 11:34:45.998: INFO: Got endpoints: latency-svc-bpgdp [1.587801434s] May 2 11:34:46.141: INFO: Created: latency-svc-kt2d4 May 2 11:34:46.143: INFO: Got endpoints: latency-svc-kt2d4 [1.553564559s] May 2 11:34:46.174: INFO: Created: latency-svc-sx5fm May 2 11:34:46.199: INFO: Got endpoints: latency-svc-sx5fm [1.321775362s] May 2 11:34:46.239: INFO: Created: latency-svc-x7q7f May 2 11:34:46.290: INFO: Got endpoints: latency-svc-x7q7f [1.228517785s] May 2 11:34:46.304: INFO: Created: latency-svc-4rj9r May 2 11:34:46.319: INFO: Got endpoints: latency-svc-4rj9r [1.080440126s] May 2 11:34:46.340: INFO: Created: latency-svc-rktjw May 2 11:34:46.355: INFO: Got endpoints: latency-svc-rktjw [1.092613119s] May 2 11:34:46.378: INFO: Created: latency-svc-lj82s May 2 11:34:46.427: INFO: Got endpoints: latency-svc-lj82s [1.074838922s] May 2 11:34:46.451: INFO: Created: latency-svc-j2ws6 May 2 11:34:46.463: INFO: Got endpoints: latency-svc-j2ws6 
[1.051128408s] May 2 11:34:46.490: INFO: Created: latency-svc-qpcml May 2 11:34:46.506: INFO: Got endpoints: latency-svc-qpcml [994.399483ms] May 2 11:34:46.584: INFO: Created: latency-svc-d57qn May 2 11:34:46.596: INFO: Got endpoints: latency-svc-d57qn [1.044831674s] May 2 11:34:46.642: INFO: Created: latency-svc-nqms7 May 2 11:34:46.656: INFO: Got endpoints: latency-svc-nqms7 [1.056644876s] May 2 11:34:46.757: INFO: Created: latency-svc-5zrxb May 2 11:34:46.760: INFO: Got endpoints: latency-svc-5zrxb [1.076782909s] May 2 11:34:46.857: INFO: Created: latency-svc-qjk7c May 2 11:34:46.912: INFO: Got endpoints: latency-svc-qjk7c [1.156235884s] May 2 11:34:46.936: INFO: Created: latency-svc-tx254 May 2 11:34:46.951: INFO: Got endpoints: latency-svc-tx254 [1.109215674s] May 2 11:34:46.976: INFO: Created: latency-svc-kgn8d May 2 11:34:46.993: INFO: Got endpoints: latency-svc-kgn8d [1.086812715s] May 2 11:34:47.104: INFO: Created: latency-svc-hj84z May 2 11:34:47.113: INFO: Got endpoints: latency-svc-hj84z [1.115001988s] May 2 11:34:47.144: INFO: Created: latency-svc-2fzwh May 2 11:34:47.161: INFO: Got endpoints: latency-svc-2fzwh [1.018027196s] May 2 11:34:47.266: INFO: Created: latency-svc-dlzcd May 2 11:34:47.269: INFO: Got endpoints: latency-svc-dlzcd [1.070392835s] May 2 11:34:47.337: INFO: Created: latency-svc-57w4v May 2 11:34:47.354: INFO: Got endpoints: latency-svc-57w4v [1.063174711s] May 2 11:34:47.441: INFO: Created: latency-svc-cf77p May 2 11:34:47.462: INFO: Got endpoints: latency-svc-cf77p [1.143629502s] May 2 11:34:47.518: INFO: Created: latency-svc-qshk4 May 2 11:34:47.589: INFO: Got endpoints: latency-svc-qshk4 [1.23390579s] May 2 11:34:47.595: INFO: Created: latency-svc-45vvl May 2 11:34:47.618: INFO: Got endpoints: latency-svc-45vvl [1.190992146s] May 2 11:34:47.661: INFO: Created: latency-svc-tp89f May 2 11:34:47.751: INFO: Got endpoints: latency-svc-tp89f [1.28722502s] May 2 11:34:47.753: INFO: Created: latency-svc-pgz2m May 2 11:34:47.781: INFO: 
Got endpoints: latency-svc-pgz2m [1.274878036s] May 2 11:34:47.852: INFO: Created: latency-svc-zm8rs May 2 11:34:48.100: INFO: Got endpoints: latency-svc-zm8rs [1.504053743s] May 2 11:34:48.332: INFO: Created: latency-svc-thnbl May 2 11:34:48.356: INFO: Got endpoints: latency-svc-thnbl [1.700407314s] May 2 11:34:48.426: INFO: Created: latency-svc-kpxnf May 2 11:34:48.523: INFO: Got endpoints: latency-svc-kpxnf [1.762603448s] May 2 11:34:48.770: INFO: Created: latency-svc-sdchd May 2 11:34:48.782: INFO: Got endpoints: latency-svc-sdchd [1.869974583s] May 2 11:34:48.972: INFO: Created: latency-svc-q4l29 May 2 11:34:49.242: INFO: Got endpoints: latency-svc-q4l29 [2.291873058s] May 2 11:34:49.247: INFO: Created: latency-svc-7h77v May 2 11:34:49.321: INFO: Got endpoints: latency-svc-7h77v [2.327266056s] May 2 11:34:49.416: INFO: Created: latency-svc-td4l9 May 2 11:34:49.453: INFO: Got endpoints: latency-svc-td4l9 [2.340052411s] May 2 11:34:49.504: INFO: Created: latency-svc-8scz6 May 2 11:34:49.572: INFO: Got endpoints: latency-svc-8scz6 [2.410459615s] May 2 11:34:49.577: INFO: Created: latency-svc-6nrhl May 2 11:34:49.584: INFO: Got endpoints: latency-svc-6nrhl [2.31477104s] May 2 11:34:49.613: INFO: Created: latency-svc-5qglh May 2 11:34:49.626: INFO: Got endpoints: latency-svc-5qglh [2.272826721s] May 2 11:34:49.648: INFO: Created: latency-svc-8kqb2 May 2 11:34:49.657: INFO: Got endpoints: latency-svc-8kqb2 [2.194252422s] May 2 11:34:49.733: INFO: Created: latency-svc-t7jpd May 2 11:34:49.763: INFO: Got endpoints: latency-svc-t7jpd [2.173644924s] May 2 11:34:49.825: INFO: Created: latency-svc-ls94r May 2 11:34:50.063: INFO: Got endpoints: latency-svc-ls94r [2.444445246s] May 2 11:34:50.357: INFO: Created: latency-svc-4mzfl May 2 11:34:50.362: INFO: Got endpoints: latency-svc-4mzfl [2.611792021s] May 2 11:34:50.399: INFO: Created: latency-svc-rzwfz May 2 11:34:50.425: INFO: Got endpoints: latency-svc-rzwfz [2.644372548s] May 2 11:34:50.594: INFO: Created: 
latency-svc-2hh6c May 2 11:34:50.635: INFO: Got endpoints: latency-svc-2hh6c [2.53501332s] May 2 11:34:50.775: INFO: Created: latency-svc-p74rb May 2 11:34:50.808: INFO: Got endpoints: latency-svc-p74rb [2.451812492s] May 2 11:34:50.844: INFO: Created: latency-svc-x4qm5 May 2 11:34:50.858: INFO: Got endpoints: latency-svc-x4qm5 [2.334510047s] May 2 11:34:50.961: INFO: Created: latency-svc-qs75t May 2 11:34:50.964: INFO: Got endpoints: latency-svc-qs75t [2.181934749s] May 2 11:34:50.999: INFO: Created: latency-svc-gjbc5 May 2 11:34:51.014: INFO: Got endpoints: latency-svc-gjbc5 [1.77103893s] May 2 11:34:51.036: INFO: Created: latency-svc-zxmjd May 2 11:34:51.056: INFO: Got endpoints: latency-svc-zxmjd [1.735081679s] May 2 11:34:51.104: INFO: Created: latency-svc-tk7kl May 2 11:34:51.127: INFO: Got endpoints: latency-svc-tk7kl [1.673650682s] May 2 11:34:51.175: INFO: Created: latency-svc-2wjc2 May 2 11:34:51.194: INFO: Got endpoints: latency-svc-2wjc2 [1.622581379s] May 2 11:34:51.576: INFO: Created: latency-svc-mcvb4 May 2 11:34:51.609: INFO: Got endpoints: latency-svc-mcvb4 [2.024645242s] May 2 11:34:51.775: INFO: Created: latency-svc-kr5pf May 2 11:34:51.778: INFO: Got endpoints: latency-svc-kr5pf [2.15109316s] May 2 11:34:51.812: INFO: Created: latency-svc-74hz7 May 2 11:34:51.824: INFO: Got endpoints: latency-svc-74hz7 [2.167392393s] May 2 11:34:51.843: INFO: Created: latency-svc-qjr65 May 2 11:34:51.854: INFO: Got endpoints: latency-svc-qjr65 [2.09184868s] May 2 11:34:52.058: INFO: Created: latency-svc-x4hlv May 2 11:34:52.064: INFO: Got endpoints: latency-svc-x4hlv [2.0011278s] May 2 11:34:52.316: INFO: Created: latency-svc-dnknf May 2 11:34:52.368: INFO: Got endpoints: latency-svc-dnknf [2.005368263s] May 2 11:34:52.442: INFO: Created: latency-svc-9qddh May 2 11:34:52.455: INFO: Got endpoints: latency-svc-9qddh [2.030003678s] May 2 11:34:52.483: INFO: Created: latency-svc-bjfln May 2 11:34:52.503: INFO: Got endpoints: latency-svc-bjfln [1.867736103s] May 2 
11:34:52.527: INFO: Created: latency-svc-48s2m May 2 11:34:52.539: INFO: Got endpoints: latency-svc-48s2m [1.730521972s] May 2 11:34:52.609: INFO: Created: latency-svc-fvvc2 May 2 11:34:52.617: INFO: Got endpoints: latency-svc-fvvc2 [1.759447408s] May 2 11:34:52.647: INFO: Created: latency-svc-q5pj5 May 2 11:34:52.787: INFO: Created: latency-svc-mbxr8 May 2 11:34:52.794: INFO: Got endpoints: latency-svc-q5pj5 [1.829058347s] May 2 11:34:52.825: INFO: Got endpoints: latency-svc-mbxr8 [1.81137776s] May 2 11:34:52.885: INFO: Created: latency-svc-hvx9n May 2 11:34:53.008: INFO: Got endpoints: latency-svc-hvx9n [1.952290274s] May 2 11:34:53.009: INFO: Created: latency-svc-tmnrf May 2 11:34:53.044: INFO: Got endpoints: latency-svc-tmnrf [1.91695838s] May 2 11:34:53.089: INFO: Created: latency-svc-4x2np May 2 11:34:53.104: INFO: Got endpoints: latency-svc-4x2np [1.90928754s] May 2 11:34:53.164: INFO: Created: latency-svc-mh2vw May 2 11:34:53.176: INFO: Got endpoints: latency-svc-mh2vw [1.567033565s] May 2 11:34:53.204: INFO: Created: latency-svc-6zr9c May 2 11:34:53.220: INFO: Got endpoints: latency-svc-6zr9c [1.441968099s] May 2 11:34:53.240: INFO: Created: latency-svc-gr8sl May 2 11:34:53.319: INFO: Got endpoints: latency-svc-gr8sl [1.495385719s] May 2 11:34:53.330: INFO: Created: latency-svc-nngkk May 2 11:34:53.352: INFO: Got endpoints: latency-svc-nngkk [1.497166193s] May 2 11:34:53.390: INFO: Created: latency-svc-4j52b May 2 11:34:53.542: INFO: Got endpoints: latency-svc-4j52b [1.477524936s] May 2 11:34:53.556: INFO: Created: latency-svc-b2qmk May 2 11:34:53.574: INFO: Got endpoints: latency-svc-b2qmk [1.205725975s] May 2 11:34:53.605: INFO: Created: latency-svc-zzhr4 May 2 11:34:53.616: INFO: Got endpoints: latency-svc-zzhr4 [1.160341029s] May 2 11:34:53.709: INFO: Created: latency-svc-x8dmh May 2 11:34:53.711: INFO: Got endpoints: latency-svc-x8dmh [1.208199752s] May 2 11:34:53.748: INFO: Created: latency-svc-74569 May 2 11:34:53.784: INFO: Got endpoints: 
latency-svc-74569 [1.244817598s] May 2 11:34:54.342: INFO: Created: latency-svc-f5j58 May 2 11:34:54.643: INFO: Got endpoints: latency-svc-f5j58 [2.0260524s] May 2 11:34:54.649: INFO: Created: latency-svc-stb9d May 2 11:34:54.708: INFO: Got endpoints: latency-svc-stb9d [1.914474297s] May 2 11:34:54.798: INFO: Created: latency-svc-bsmtg May 2 11:34:54.839: INFO: Got endpoints: latency-svc-bsmtg [2.013476475s] May 2 11:34:54.839: INFO: Created: latency-svc-f4fjt May 2 11:34:54.863: INFO: Got endpoints: latency-svc-f4fjt [1.855086128s] May 2 11:34:54.887: INFO: Created: latency-svc-jljgr May 2 11:34:54.954: INFO: Got endpoints: latency-svc-jljgr [1.910446682s] May 2 11:34:54.956: INFO: Created: latency-svc-q2kln May 2 11:34:54.971: INFO: Got endpoints: latency-svc-q2kln [1.867634018s] May 2 11:34:54.991: INFO: Created: latency-svc-g7xnd May 2 11:34:55.008: INFO: Got endpoints: latency-svc-g7xnd [1.831880763s] May 2 11:34:55.031: INFO: Created: latency-svc-zhzcq May 2 11:34:55.044: INFO: Got endpoints: latency-svc-zhzcq [1.824455781s] May 2 11:34:55.104: INFO: Created: latency-svc-ctws8 May 2 11:34:55.114: INFO: Got endpoints: latency-svc-ctws8 [1.79492557s] May 2 11:34:55.145: INFO: Created: latency-svc-6xcb7 May 2 11:34:55.159: INFO: Got endpoints: latency-svc-6xcb7 [1.807424553s] May 2 11:34:55.189: INFO: Created: latency-svc-jzq68 May 2 11:34:55.201: INFO: Got endpoints: latency-svc-jzq68 [1.659554614s] May 2 11:34:55.290: INFO: Created: latency-svc-fsd87 May 2 11:34:55.293: INFO: Got endpoints: latency-svc-fsd87 [1.719342041s] May 2 11:34:55.367: INFO: Created: latency-svc-ggb9w May 2 11:34:55.434: INFO: Got endpoints: latency-svc-ggb9w [1.817921278s] May 2 11:34:55.440: INFO: Created: latency-svc-xh967 May 2 11:34:55.455: INFO: Got endpoints: latency-svc-xh967 [1.743139353s] May 2 11:34:55.476: INFO: Created: latency-svc-8gcb5 May 2 11:34:55.490: INFO: Got endpoints: latency-svc-8gcb5 [1.706197054s] May 2 11:34:55.523: INFO: Created: latency-svc-wvbsj May 2 
11:34:55.589: INFO: Got endpoints: latency-svc-wvbsj [945.711798ms] May 2 11:34:55.591: INFO: Created: latency-svc-vqn9f May 2 11:34:55.598: INFO: Got endpoints: latency-svc-vqn9f [890.087225ms] May 2 11:34:55.638: INFO: Created: latency-svc-tnw68 May 2 11:34:55.659: INFO: Got endpoints: latency-svc-tnw68 [820.254456ms] May 2 11:34:55.686: INFO: Created: latency-svc-jvnj5 May 2 11:34:55.774: INFO: Got endpoints: latency-svc-jvnj5 [911.098259ms] May 2 11:34:55.787: INFO: Created: latency-svc-vrbn4 May 2 11:34:55.817: INFO: Got endpoints: latency-svc-vrbn4 [862.8441ms] May 2 11:34:55.872: INFO: Created: latency-svc-cnkt8 May 2 11:34:55.918: INFO: Got endpoints: latency-svc-cnkt8 [946.709515ms] May 2 11:34:55.944: INFO: Created: latency-svc-8hjmf May 2 11:34:55.960: INFO: Got endpoints: latency-svc-8hjmf [951.577247ms] May 2 11:34:55.986: INFO: Created: latency-svc-d4tr6 May 2 11:34:56.002: INFO: Got endpoints: latency-svc-d4tr6 [957.714702ms] May 2 11:34:56.063: INFO: Created: latency-svc-bjtzn May 2 11:34:56.066: INFO: Got endpoints: latency-svc-bjtzn [951.841202ms] May 2 11:34:56.087: INFO: Created: latency-svc-lbfpk May 2 11:34:56.098: INFO: Got endpoints: latency-svc-lbfpk [938.916672ms] May 2 11:34:56.117: INFO: Created: latency-svc-n9d22 May 2 11:34:56.129: INFO: Got endpoints: latency-svc-n9d22 [927.259626ms] May 2 11:34:56.148: INFO: Created: latency-svc-62gsl May 2 11:34:56.218: INFO: Got endpoints: latency-svc-62gsl [924.876878ms] May 2 11:34:56.237: INFO: Created: latency-svc-9nwjd May 2 11:34:56.250: INFO: Got endpoints: latency-svc-9nwjd [815.885815ms] May 2 11:34:56.267: INFO: Created: latency-svc-6cj49 May 2 11:34:56.280: INFO: Got endpoints: latency-svc-6cj49 [825.295985ms] May 2 11:34:56.304: INFO: Created: latency-svc-2wlhz May 2 11:34:56.374: INFO: Got endpoints: latency-svc-2wlhz [883.678548ms] May 2 11:34:56.376: INFO: Created: latency-svc-rj8ln May 2 11:34:56.382: INFO: Got endpoints: latency-svc-rj8ln [792.813704ms] May 2 11:34:56.406: INFO: 
Created: latency-svc-qr2mn May 2 11:34:56.425: INFO: Got endpoints: latency-svc-qr2mn [826.651319ms] May 2 11:34:56.455: INFO: Created: latency-svc-nlcwz May 2 11:34:56.460: INFO: Got endpoints: latency-svc-nlcwz [801.384155ms] May 2 11:34:56.543: INFO: Created: latency-svc-8mkdt May 2 11:34:56.567: INFO: Got endpoints: latency-svc-8mkdt [792.333449ms] May 2 11:34:56.593: INFO: Created: latency-svc-vkb2c May 2 11:34:56.605: INFO: Got endpoints: latency-svc-vkb2c [787.984515ms] May 2 11:34:56.628: INFO: Created: latency-svc-78l29 May 2 11:34:56.697: INFO: Got endpoints: latency-svc-78l29 [778.736555ms] May 2 11:34:56.700: INFO: Created: latency-svc-w8w6t May 2 11:34:56.714: INFO: Got endpoints: latency-svc-w8w6t [754.06356ms] May 2 11:34:56.735: INFO: Created: latency-svc-mrkr2 May 2 11:34:56.750: INFO: Got endpoints: latency-svc-mrkr2 [748.533003ms] May 2 11:34:56.771: INFO: Created: latency-svc-6drfs May 2 11:34:56.780: INFO: Got endpoints: latency-svc-6drfs [713.527474ms] May 2 11:34:56.846: INFO: Created: latency-svc-rbjmf May 2 11:34:56.856: INFO: Got endpoints: latency-svc-rbjmf [757.58307ms] May 2 11:34:56.895: INFO: Created: latency-svc-w6gd8 May 2 11:34:56.938: INFO: Got endpoints: latency-svc-w6gd8 [809.661825ms] May 2 11:34:56.993: INFO: Created: latency-svc-t5znq May 2 11:34:57.008: INFO: Got endpoints: latency-svc-t5znq [790.398434ms] May 2 11:34:57.030: INFO: Created: latency-svc-qntfh May 2 11:34:57.058: INFO: Got endpoints: latency-svc-qntfh [808.463412ms] May 2 11:34:57.116: INFO: Created: latency-svc-tc99f May 2 11:34:57.123: INFO: Got endpoints: latency-svc-tc99f [842.85053ms] May 2 11:34:57.144: INFO: Created: latency-svc-8tznf May 2 11:34:57.153: INFO: Got endpoints: latency-svc-8tznf [779.129605ms] May 2 11:34:57.176: INFO: Created: latency-svc-lkhth May 2 11:34:57.202: INFO: Got endpoints: latency-svc-lkhth [820.118001ms] May 2 11:34:57.284: INFO: Created: latency-svc-4czzm May 2 11:34:57.292: INFO: Got endpoints: latency-svc-4czzm 
[866.75232ms] May 2 11:34:57.312: INFO: Created: latency-svc-9qztd May 2 11:34:57.328: INFO: Got endpoints: latency-svc-9qztd [867.932812ms] May 2 11:34:57.356: INFO: Created: latency-svc-lvf9p May 2 11:34:57.364: INFO: Got endpoints: latency-svc-lvf9p [797.595169ms] May 2 11:34:57.440: INFO: Created: latency-svc-h9jjf May 2 11:34:57.442: INFO: Got endpoints: latency-svc-h9jjf [836.859047ms] May 2 11:34:57.523: INFO: Created: latency-svc-7pfld May 2 11:34:57.619: INFO: Got endpoints: latency-svc-7pfld [921.529557ms] May 2 11:34:57.634: INFO: Created: latency-svc-mdzrb May 2 11:34:57.647: INFO: Got endpoints: latency-svc-mdzrb [933.119372ms] May 2 11:34:57.683: INFO: Created: latency-svc-5hn5j May 2 11:34:57.707: INFO: Got endpoints: latency-svc-5hn5j [956.905419ms] May 2 11:34:57.775: INFO: Created: latency-svc-lnh2l May 2 11:34:57.777: INFO: Got endpoints: latency-svc-lnh2l [997.418777ms] May 2 11:34:57.827: INFO: Created: latency-svc-kp452 May 2 11:34:57.858: INFO: Got endpoints: latency-svc-kp452 [1.002181169s] May 2 11:34:57.966: INFO: Created: latency-svc-jxlgb May 2 11:34:58.014: INFO: Got endpoints: latency-svc-jxlgb [1.075558816s] May 2 11:34:58.062: INFO: Created: latency-svc-dcqf7 May 2 11:34:58.130: INFO: Got endpoints: latency-svc-dcqf7 [1.121508646s] May 2 11:34:58.192: INFO: Created: latency-svc-jgjjm May 2 11:34:58.199: INFO: Got endpoints: latency-svc-jgjjm [1.141097516s] May 2 11:34:58.284: INFO: Created: latency-svc-plsrm May 2 11:34:58.286: INFO: Got endpoints: latency-svc-plsrm [1.163528004s] May 2 11:34:58.330: INFO: Created: latency-svc-cs4gk May 2 11:34:58.344: INFO: Got endpoints: latency-svc-cs4gk [1.191012741s] May 2 11:34:58.380: INFO: Created: latency-svc-w6j7c May 2 11:34:58.439: INFO: Got endpoints: latency-svc-w6j7c [1.23732714s] May 2 11:34:58.441: INFO: Created: latency-svc-csqnh May 2 11:34:58.452: INFO: Got endpoints: latency-svc-csqnh [1.160546308s] May 2 11:34:58.476: INFO: Created: latency-svc-r6qsz May 2 11:34:58.489: INFO: 
Got endpoints: latency-svc-r6qsz [1.160406689s] May 2 11:34:58.510: INFO: Created: latency-svc-dhfp5 May 2 11:34:58.625: INFO: Got endpoints: latency-svc-dhfp5 [1.260776195s] May 2 11:34:58.628: INFO: Created: latency-svc-kqxcr May 2 11:34:58.652: INFO: Got endpoints: latency-svc-kqxcr [1.209463733s] May 2 11:34:58.699: INFO: Created: latency-svc-zw2c2 May 2 11:34:58.712: INFO: Got endpoints: latency-svc-zw2c2 [1.093104909s] May 2 11:34:58.768: INFO: Created: latency-svc-z7n9k May 2 11:34:58.796: INFO: Got endpoints: latency-svc-z7n9k [1.149176369s] May 2 11:34:58.829: INFO: Created: latency-svc-fj6d5 May 2 11:34:58.845: INFO: Got endpoints: latency-svc-fj6d5 [1.137045422s] May 2 11:34:59.255: INFO: Created: latency-svc-qwsj6 May 2 11:34:59.332: INFO: Got endpoints: latency-svc-qwsj6 [1.554706124s] May 2 11:34:59.332: INFO: Created: latency-svc-pw6db May 2 11:34:59.446: INFO: Got endpoints: latency-svc-pw6db [1.58744362s] May 2 11:34:59.476: INFO: Created: latency-svc-kdwmc May 2 11:34:59.498: INFO: Got endpoints: latency-svc-kdwmc [1.484263347s] May 2 11:34:59.539: INFO: Created: latency-svc-5hzv4 May 2 11:34:59.607: INFO: Got endpoints: latency-svc-5hzv4 [1.476663946s] May 2 11:34:59.633: INFO: Created: latency-svc-9wvzg May 2 11:34:59.654: INFO: Got endpoints: latency-svc-9wvzg [1.454835026s] May 2 11:34:59.688: INFO: Created: latency-svc-hwdz9 May 2 11:34:59.804: INFO: Got endpoints: latency-svc-hwdz9 [1.518032012s] May 2 11:34:59.808: INFO: Created: latency-svc-b4ng9 May 2 11:34:59.837: INFO: Got endpoints: latency-svc-b4ng9 [1.492838921s] May 2 11:34:59.861: INFO: Created: latency-svc-h4ht4 May 2 11:34:59.877: INFO: Got endpoints: latency-svc-h4ht4 [1.436992592s] May 2 11:34:59.901: INFO: Created: latency-svc-68q2d May 2 11:34:59.984: INFO: Got endpoints: latency-svc-68q2d [1.531627128s] May 2 11:34:59.986: INFO: Created: latency-svc-bw776 May 2 11:34:59.997: INFO: Got endpoints: latency-svc-bw776 [1.508367019s] May 2 11:35:00.022: INFO: Created: 
latency-svc-cfjwl May 2 11:35:00.039: INFO: Got endpoints: latency-svc-cfjwl [1.414080854s] May 2 11:35:00.059: INFO: Created: latency-svc-fzzkd May 2 11:35:00.076: INFO: Got endpoints: latency-svc-fzzkd [1.42405435s] May 2 11:35:00.140: INFO: Created: latency-svc-s8rjz May 2 11:35:00.143: INFO: Got endpoints: latency-svc-s8rjz [1.430837219s] May 2 11:35:00.192: INFO: Created: latency-svc-jfp5c May 2 11:35:00.202: INFO: Got endpoints: latency-svc-jfp5c [1.405763506s] May 2 11:35:00.305: INFO: Created: latency-svc-hzmln May 2 11:35:00.309: INFO: Got endpoints: latency-svc-hzmln [1.464846012s] May 2 11:35:00.340: INFO: Created: latency-svc-5cfc6 May 2 11:35:00.353: INFO: Got endpoints: latency-svc-5cfc6 [1.020371975s] May 2 11:35:00.376: INFO: Created: latency-svc-5rkpc May 2 11:35:00.389: INFO: Got endpoints: latency-svc-5rkpc [943.521356ms] May 2 11:35:00.452: INFO: Created: latency-svc-rspn4 May 2 11:35:00.454: INFO: Got endpoints: latency-svc-rspn4 [955.765565ms] May 2 11:35:00.480: INFO: Created: latency-svc-r557x May 2 11:35:00.498: INFO: Got endpoints: latency-svc-r557x [890.869991ms] May 2 11:35:00.516: INFO: Created: latency-svc-trp72 May 2 11:35:00.540: INFO: Got endpoints: latency-svc-trp72 [885.387731ms] May 2 11:35:00.598: INFO: Created: latency-svc-mkf2g May 2 11:35:00.602: INFO: Got endpoints: latency-svc-mkf2g [797.086747ms] May 2 11:35:00.629: INFO: Created: latency-svc-89dcv May 2 11:35:00.648: INFO: Got endpoints: latency-svc-89dcv [811.196063ms] May 2 11:35:00.671: INFO: Created: latency-svc-w5trc May 2 11:35:00.685: INFO: Got endpoints: latency-svc-w5trc [807.975891ms] May 2 11:35:00.757: INFO: Created: latency-svc-689th May 2 11:35:00.759: INFO: Got endpoints: latency-svc-689th [775.313089ms] May 2 11:35:00.798: INFO: Created: latency-svc-2bh4c May 2 11:35:00.841: INFO: Got endpoints: latency-svc-2bh4c [843.767211ms] May 2 11:35:00.906: INFO: Created: latency-svc-tw2k7 May 2 11:35:00.928: INFO: Got endpoints: latency-svc-tw2k7 [888.774621ms] May 
2 11:35:00.959: INFO: Created: latency-svc-n54xw May 2 11:35:00.973: INFO: Got endpoints: latency-svc-n54xw [897.597659ms] May 2 11:35:00.974: INFO: Latencies: [80.442048ms 196.872202ms 236.879162ms 334.385304ms 392.435285ms 434.100017ms 488.822767ms 548.767586ms 581.029088ms 658.698599ms 713.527474ms 720.145126ms 748.533003ms 754.06356ms 757.58307ms 775.313089ms 778.736555ms 779.129605ms 787.984515ms 790.398434ms 792.333449ms 792.813704ms 797.086747ms 797.595169ms 801.384155ms 807.975891ms 808.463412ms 809.661825ms 811.196063ms 815.885815ms 820.118001ms 820.254456ms 825.295985ms 826.651319ms 836.859047ms 842.85053ms 843.767211ms 862.8441ms 866.75232ms 867.932812ms 883.678548ms 885.387731ms 888.774621ms 890.087225ms 890.869991ms 893.864234ms 897.597659ms 911.098259ms 921.529557ms 924.876878ms 927.259626ms 933.119372ms 935.964451ms 938.916672ms 943.521356ms 945.711798ms 946.709515ms 951.577247ms 951.841202ms 955.765565ms 956.905419ms 957.714702ms 964.401021ms 967.342243ms 993.14559ms 994.399483ms 997.418777ms 1.002181169s 1.008101827s 1.017380125s 1.018027196s 1.020371975s 1.044831674s 1.051128408s 1.056644876s 1.061117245s 1.063174711s 1.070392835s 1.074838922s 1.075558816s 1.076782909s 1.080440126s 1.086812715s 1.092613119s 1.093104909s 1.109215674s 1.115001988s 1.121508646s 1.137045422s 1.141097516s 1.143629502s 1.149176369s 1.156235884s 1.160341029s 1.160406689s 1.160546308s 1.163528004s 1.190992146s 1.191012741s 1.205725975s 1.208199752s 1.209463733s 1.228517785s 1.23390579s 1.23732714s 1.244817598s 1.260776195s 1.274878036s 1.28722502s 1.321775362s 1.368235073s 1.405763506s 1.414080854s 1.42405435s 1.430837219s 1.436992592s 1.441968099s 1.454835026s 1.464846012s 1.476663946s 1.477524936s 1.484263347s 1.492838921s 1.495385719s 1.497166193s 1.504053743s 1.505982975s 1.508367019s 1.518032012s 1.531627128s 1.553564559s 1.554706124s 1.567033565s 1.58744362s 1.587801434s 1.622581379s 1.659554614s 1.673650682s 1.700407314s 1.706197054s 1.719342041s 1.730521972s 
1.735081679s 1.738806827s 1.743139353s 1.759447408s 1.762603448s 1.77103893s 1.79492557s 1.807424553s 1.81137776s 1.817921278s 1.824455781s 1.829058347s 1.831880763s 1.855086128s 1.863447067s 1.867634018s 1.867736103s 1.868683116s 1.869974583s 1.888525285s 1.893486244s 1.90928754s 1.910446682s 1.914474297s 1.91695838s 1.925942331s 1.929340175s 1.936427707s 1.942570261s 1.952290274s 1.953948234s 1.982621146s 1.990594372s 2.0011278s 2.005368263s 2.007563052s 2.013476475s 2.024645242s 2.0260524s 2.030003678s 2.09184868s 2.15109316s 2.167392393s 2.173644924s 2.181934749s 2.194252422s 2.272826721s 2.291873058s 2.31477104s 2.327266056s 2.334510047s 2.340052411s 2.410459615s 2.444445246s 2.451812492s 2.53501332s 2.611792021s 2.644372548s] May 2 11:35:00.974: INFO: 50 %ile: 1.208199752s May 2 11:35:00.974: INFO: 90 %ile: 2.0260524s May 2 11:35:00.974: INFO: 99 %ile: 2.611792021s May 2 11:35:00.974: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:35:00.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-bqxfx" for this suite. 
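The summary lines above (50 %ile / 90 %ile / 99 %ile over 200 samples) are derived from the sorted latency list printed just before them. A minimal nearest-rank sketch of that computation follows; the e2e framework's exact percentile indexing may differ slightly, so treat this as an illustration, not the framework's code:

```python
# Illustrative only: approximates how a percentile summary like the one above
# could be computed from the raw latency samples. The real test framework's
# indexing convention is an assumption here.

def percentile(samples_ms, p):
    """Return the p-th percentile of a list of latencies (nearest-rank style)."""
    ordered = sorted(samples_ms)
    # Nearest-rank index; clamp so p=100 maps to the last element.
    idx = min(len(ordered) - 1, int(len(ordered) * p / 100))
    return ordered[idx]

samples = [80.4, 196.9, 236.9, 334.4, 392.4, 434.1, 488.8, 548.8, 581.0, 658.7]
print("50 %ile:", percentile(samples, 50))
print("90 %ile:", percentile(samples, 90))
print("99 %ile:", percentile(samples, 99))
```

With 200 samples, the 50 %ile lands near the middle of the sorted list, which is why it sits close to the median of the values printed above.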
May 2 11:35:34.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:35:35.043: INFO: namespace: e2e-tests-svc-latency-bqxfx, resource: bindings, ignored listing per whitelist May 2 11:35:35.063: INFO: namespace e2e-tests-svc-latency-bqxfx deletion completed in 34.083547135s • [SLOW TEST:57.869 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:35:35.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 2 11:35:35.174: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 6.24053ms) May 2 11:35:35.177: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.047728ms) May 2 11:35:35.181: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.560043ms) May 2 11:35:35.184: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.716541ms) May 2 11:35:35.187: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.383111ms) May 2 11:35:35.190: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.301488ms) May 2 11:35:35.194: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.717351ms) May 2 11:35:35.198: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.766807ms) May 2 11:35:35.201: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.256102ms) May 2 11:35:35.204: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.868136ms) May 2 11:35:35.208: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.351148ms) May 2 11:35:35.211: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.356668ms) May 2 11:35:35.217: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 6.27634ms) May 2 11:35:35.220: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.14442ms) May 2 11:35:35.236: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 15.782027ms) May 2 11:35:35.239: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.419534ms) May 2 11:35:35.242: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.992927ms) May 2 11:35:35.245: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.051503ms) May 2 11:35:35.248: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.976161ms) May 2 11:35:35.251: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.195766ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:35:35.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-6wz47" for this suite. May 2 11:35:41.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:35:41.315: INFO: namespace: e2e-tests-proxy-6wz47, resource: bindings, ignored listing per whitelist May 2 11:35:41.346: INFO: namespace e2e-tests-proxy-6wz47 deletion completed in 6.091817648s • [SLOW TEST:6.283 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:35:41.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 2 11:35:41.446: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d818439-8c69-11ea-8045-0242ac110017" in namespace "e2e-tests-downward-api-2qb47" to be "success or failure" May 2 11:35:41.456: INFO: Pod "downwardapi-volume-0d818439-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.569009ms May 2 11:35:43.482: INFO: Pod "downwardapi-volume-0d818439-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036222783s May 2 11:35:45.486: INFO: Pod "downwardapi-volume-0d818439-8c69-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04037269s STEP: Saw pod success May 2 11:35:45.486: INFO: Pod "downwardapi-volume-0d818439-8c69-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:35:45.490: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-0d818439-8c69-11ea-8045-0242ac110017 container client-container: STEP: delete the pod May 2 11:35:45.511: INFO: Waiting for pod downwardapi-volume-0d818439-8c69-11ea-8045-0242ac110017 to disappear May 2 11:35:45.566: INFO: Pod downwardapi-volume-0d818439-8c69-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:35:45.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-2qb47" for this suite. 
May 2 11:35:51.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:35:51.650: INFO: namespace: e2e-tests-downward-api-2qb47, resource: bindings, ignored listing per whitelist May 2 11:35:51.667: INFO: namespace e2e-tests-downward-api-2qb47 deletion completed in 6.097093245s • [SLOW TEST:10.321 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:35:51.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 2 11:35:51.781: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:35:52.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-2hbjs" for this suite. 
May 2 11:35:58.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:35:58.933: INFO: namespace: e2e-tests-custom-resource-definition-2hbjs, resource: bindings, ignored listing per whitelist May 2 11:35:58.947: INFO: namespace e2e-tests-custom-resource-definition-2hbjs deletion completed in 6.095461395s • [SLOW TEST:7.280 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:35:58.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:36:03.099: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-khp2v" for this suite. May 2 11:36:55.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:36:55.212: INFO: namespace: e2e-tests-kubelet-test-khp2v, resource: bindings, ignored listing per whitelist May 2 11:36:55.227: INFO: namespace e2e-tests-kubelet-test-khp2v deletion completed in 52.097054934s • [SLOW TEST:56.279 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:36:55.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 2 11:36:55.477: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-39a361fc-8c69-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-gnn6g" to be "success or failure" May 2 11:36:55.487: INFO: Pod "downwardapi-volume-39a361fc-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 10.188761ms May 2 11:36:57.490: INFO: Pod "downwardapi-volume-39a361fc-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013533399s May 2 11:36:59.494: INFO: Pod "downwardapi-volume-39a361fc-8c69-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017541503s STEP: Saw pod success May 2 11:36:59.494: INFO: Pod "downwardapi-volume-39a361fc-8c69-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:36:59.497: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-39a361fc-8c69-11ea-8045-0242ac110017 container client-container: STEP: delete the pod May 2 11:36:59.555: INFO: Waiting for pod downwardapi-volume-39a361fc-8c69-11ea-8045-0242ac110017 to disappear May 2 11:36:59.639: INFO: Pod downwardapi-volume-39a361fc-8c69-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:36:59.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gnn6g" for this suite. 
May 2 11:37:05.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:37:05.993: INFO: namespace: e2e-tests-projected-gnn6g, resource: bindings, ignored listing per whitelist May 2 11:37:06.045: INFO: namespace e2e-tests-projected-gnn6g deletion completed in 6.402107304s • [SLOW TEST:10.818 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:37:06.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 2 11:37:10.202: INFO: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-4003c52d-8c69-11ea-8045-0242ac110017,GenerateName:,Namespace:e2e-tests-events-84524,SelfLink:/api/v1/namespaces/e2e-tests-events-84524/pods/send-events-4003c52d-8c69-11ea-8045-0242ac110017,UID:4006ce45-8c69-11ea-99e8-0242ac110002,ResourceVersion:8342114,Generation:0,CreationTimestamp:2020-05-02 11:37:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 166519202,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jsfq4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jsfq4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-jsfq4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a03950} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a03970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:37:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:37:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:37:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:37:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.185,StartTime:2020-05-02 11:37:06 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-02 11:37:08 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://46b8403b347f6d20c4f0706674780a181d1255a8afdf7bc93ff751fa5b94c23e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 2 11:37:12.206: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 2 11:37:14.210: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:37:14.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-84524" for this suite. 
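The "Saw scheduler event" / "Saw kubelet event" checks above amount to filtering the pod's events by their source component. A minimal sketch of that filtering, using hypothetical event dicts in place of the real client-go `v1.Event` objects (which the test selects with a field selector on the involved object):

```python
# Hypothetical event records standing in for v1.Event; the actual test uses
# client-go with a field selector scoped to the pod. Names here are illustrative.
events = [
    {"source": "default-scheduler", "reason": "Scheduled"},
    {"source": "kubelet", "reason": "Pulled"},
    {"source": "kubelet", "reason": "Started"},
]

def saw_event_from(events, component):
    """Return True if any event originated from the given source component."""
    return any(e["source"] == component for e in events)

assert saw_event_from(events, "default-scheduler")  # scheduler event for the pod
assert saw_event_from(events, "kubelet")            # kubelet event for the pod
```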
May 2 11:37:52.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:37:52.469: INFO: namespace: e2e-tests-events-84524, resource: bindings, ignored listing per whitelist May 2 11:37:52.478: INFO: namespace e2e-tests-events-84524 deletion completed in 38.236166123s • [SLOW TEST:46.433 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:37:52.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 2 11:37:52.575: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.600012ms) May 2 11:37:52.578: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.354236ms) May 2 11:37:52.581: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.203836ms) May 2 11:37:52.584: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.945141ms) May 2 11:37:52.588: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.57179ms) May 2 11:37:52.592: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.717966ms) May 2 11:37:52.595: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.315222ms) May 2 11:37:52.599: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.587182ms) May 2 11:37:52.602: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.387008ms) May 2 11:37:52.606: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.696615ms) May 2 11:37:52.609: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.73902ms) May 2 11:37:52.613: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.908014ms) May 2 11:37:52.647: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 33.228177ms) May 2 11:37:52.650: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.704346ms) May 2 11:37:52.654: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.623292ms) May 2 11:37:52.658: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.733562ms) May 2 11:37:52.661: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.619761ms) May 2 11:37:52.665: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.728761ms) May 2 11:37:52.668: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 3.142844ms) May 2 11:37:52.671: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/
pods/
(200; 2.941125ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:37:52.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-kvnsx" for this suite. May 2 11:38:00.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:38:00.706: INFO: namespace: e2e-tests-proxy-kvnsx, resource: bindings, ignored listing per whitelist May 2 11:38:00.773: INFO: namespace e2e-tests-proxy-kvnsx deletion completed in 8.097782174s • [SLOW TEST:8.294 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:38:00.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-60e6a63e-8c69-11ea-8045-0242ac110017 STEP: Creating a pod to test consume 
secrets May 2 11:38:01.417: INFO: Waiting up to 5m0s for pod "pod-secrets-60ed989a-8c69-11ea-8045-0242ac110017" in namespace "e2e-tests-secrets-kmct9" to be "success or failure" May 2 11:38:01.868: INFO: Pod "pod-secrets-60ed989a-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 451.297648ms May 2 11:38:04.466: INFO: Pod "pod-secrets-60ed989a-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.049742105s May 2 11:38:06.471: INFO: Pod "pod-secrets-60ed989a-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.053976675s May 2 11:38:08.544: INFO: Pod "pod-secrets-60ed989a-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 7.126982084s May 2 11:38:10.548: INFO: Pod "pod-secrets-60ed989a-8c69-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.131311643s STEP: Saw pod success May 2 11:38:10.548: INFO: Pod "pod-secrets-60ed989a-8c69-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:38:10.551: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-60ed989a-8c69-11ea-8045-0242ac110017 container secret-volume-test: STEP: delete the pod May 2 11:38:10.568: INFO: Waiting for pod pod-secrets-60ed989a-8c69-11ea-8045-0242ac110017 to disappear May 2 11:38:10.572: INFO: Pod pod-secrets-60ed989a-8c69-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:38:10.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-kmct9" for this suite. 
May 2 11:38:16.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:38:16.671: INFO: namespace: e2e-tests-secrets-kmct9, resource: bindings, ignored listing per whitelist May 2 11:38:16.678: INFO: namespace e2e-tests-secrets-kmct9 deletion completed in 6.102632381s • [SLOW TEST:15.905 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:38:16.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-6a1cff8c-8c69-11ea-8045-0242ac110017 STEP: Creating a pod to test consume configMaps May 2 11:38:16.825: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6a1db5f3-8c69-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-bmfzd" to be "success or failure" May 2 11:38:16.836: INFO: Pod "pod-projected-configmaps-6a1db5f3-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.057391ms May 2 11:38:18.841: INFO: Pod "pod-projected-configmaps-6a1db5f3-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015136737s May 2 11:38:20.845: INFO: Pod "pod-projected-configmaps-6a1db5f3-8c69-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019460358s STEP: Saw pod success May 2 11:38:20.845: INFO: Pod "pod-projected-configmaps-6a1db5f3-8c69-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:38:20.848: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-6a1db5f3-8c69-11ea-8045-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 2 11:38:20.884: INFO: Waiting for pod pod-projected-configmaps-6a1db5f3-8c69-11ea-8045-0242ac110017 to disappear May 2 11:38:20.891: INFO: Pod pod-projected-configmaps-6a1db5f3-8c69-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:38:20.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-bmfzd" for this suite. 
May 2 11:38:26.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:38:26.978: INFO: namespace: e2e-tests-projected-bmfzd, resource: bindings, ignored listing per whitelist May 2 11:38:26.982: INFO: namespace e2e-tests-projected-bmfzd deletion completed in 6.084407766s • [SLOW TEST:10.304 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:38:26.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:38:31.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-8xw75" for this suite. 
May 2 11:39:13.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:39:13.222: INFO: namespace: e2e-tests-kubelet-test-8xw75, resource: bindings, ignored listing per whitelist May 2 11:39:13.228: INFO: namespace e2e-tests-kubelet-test-8xw75 deletion completed in 42.087936686s • [SLOW TEST:46.245 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:39:13.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 2 11:39:13.711: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-ctgz9,SelfLink:/api/v1/namespaces/e2e-tests-watch-ctgz9/configmaps/e2e-watch-test-resource-version,UID:8bf27b37-8c69-11ea-99e8-0242ac110002,ResourceVersion:8342456,Generation:0,CreationTimestamp:2020-05-02 11:39:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 2 11:39:13.711: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-ctgz9,SelfLink:/api/v1/namespaces/e2e-tests-watch-ctgz9/configmaps/e2e-watch-test-resource-version,UID:8bf27b37-8c69-11ea-99e8-0242ac110002,ResourceVersion:8342458,Generation:0,CreationTimestamp:2020-05-02 11:39:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:39:13.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-ctgz9" for this suite. 
May 2 11:39:19.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:39:19.848: INFO: namespace: e2e-tests-watch-ctgz9, resource: bindings, ignored listing per whitelist May 2 11:39:19.861: INFO: namespace e2e-tests-watch-ctgz9 deletion completed in 6.108069068s • [SLOW TEST:6.632 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:39:19.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-8fbf052e-8c69-11ea-8045-0242ac110017 STEP: Creating a pod to test consume secrets May 2 11:39:19.995: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8fc758bd-8c69-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-hrdbt" to be "success or failure" May 2 11:39:19.998: INFO: Pod "pod-projected-secrets-8fc758bd-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.282011ms May 2 11:39:22.091: INFO: Pod "pod-projected-secrets-8fc758bd-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096678526s May 2 11:39:24.096: INFO: Pod "pod-projected-secrets-8fc758bd-8c69-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101149268s STEP: Saw pod success May 2 11:39:24.096: INFO: Pod "pod-projected-secrets-8fc758bd-8c69-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:39:24.099: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-8fc758bd-8c69-11ea-8045-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 2 11:39:24.133: INFO: Waiting for pod pod-projected-secrets-8fc758bd-8c69-11ea-8045-0242ac110017 to disappear May 2 11:39:24.149: INFO: Pod pod-projected-secrets-8fc758bd-8c69-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:39:24.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hrdbt" for this suite. 
May 2 11:39:30.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:39:30.266: INFO: namespace: e2e-tests-projected-hrdbt, resource: bindings, ignored listing per whitelist May 2 11:39:30.305: INFO: namespace e2e-tests-projected-hrdbt deletion completed in 6.152568825s • [SLOW TEST:10.445 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:39:30.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating api versions May 2 11:39:30.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 2 11:39:30.563: INFO: stderr: "" May 2 11:39:30.563: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:39:30.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nqfdl" for this suite. May 2 11:39:36.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:39:36.599: INFO: namespace: e2e-tests-kubectl-nqfdl, resource: bindings, ignored listing per whitelist May 2 11:39:36.666: INFO: namespace e2e-tests-kubectl-nqfdl deletion completed in 6.098289516s • [SLOW TEST:6.360 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:39:36.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command May 2 11:39:36.776: INFO: Waiting up to 5m0s for pod "client-containers-99c623bd-8c69-11ea-8045-0242ac110017" in namespace "e2e-tests-containers-gbxkt" to be "success or failure" May 2 11:39:36.779: INFO: Pod "client-containers-99c623bd-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.978712ms May 2 11:39:38.850: INFO: Pod "client-containers-99c623bd-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073892443s May 2 11:39:40.854: INFO: Pod "client-containers-99c623bd-8c69-11ea-8045-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.078109769s May 2 11:39:42.858: INFO: Pod "client-containers-99c623bd-8c69-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.082501392s STEP: Saw pod success May 2 11:39:42.858: INFO: Pod "client-containers-99c623bd-8c69-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:39:42.861: INFO: Trying to get logs from node hunter-worker pod client-containers-99c623bd-8c69-11ea-8045-0242ac110017 container test-container: STEP: delete the pod May 2 11:39:42.996: INFO: Waiting for pod client-containers-99c623bd-8c69-11ea-8045-0242ac110017 to disappear May 2 11:39:43.192: INFO: Pod client-containers-99c623bd-8c69-11ea-8045-0242ac110017 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:39:43.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-gbxkt" for this suite. May 2 11:39:49.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:39:49.427: INFO: namespace: e2e-tests-containers-gbxkt, resource: bindings, ignored listing per whitelist May 2 11:39:49.470: INFO: namespace e2e-tests-containers-gbxkt deletion completed in 6.273509307s • [SLOW TEST:12.804 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a 
kubernetes client May 2 11:39:49.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-a171b92b-8c69-11ea-8045-0242ac110017 STEP: Creating a pod to test consume configMaps May 2 11:39:49.842: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a17a1677-8c69-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-p55b9" to be "success or failure" May 2 11:39:49.924: INFO: Pod "pod-projected-configmaps-a17a1677-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 81.863437ms May 2 11:39:51.940: INFO: Pod "pod-projected-configmaps-a17a1677-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098263904s May 2 11:39:53.945: INFO: Pod "pod-projected-configmaps-a17a1677-8c69-11ea-8045-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.102885678s May 2 11:39:55.995: INFO: Pod "pod-projected-configmaps-a17a1677-8c69-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.15283694s STEP: Saw pod success May 2 11:39:55.995: INFO: Pod "pod-projected-configmaps-a17a1677-8c69-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:39:55.999: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-a17a1677-8c69-11ea-8045-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 2 11:39:56.038: INFO: Waiting for pod pod-projected-configmaps-a17a1677-8c69-11ea-8045-0242ac110017 to disappear May 2 11:39:56.054: INFO: Pod pod-projected-configmaps-a17a1677-8c69-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:39:56.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-p55b9" for this suite. May 2 11:40:02.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:40:02.123: INFO: namespace: e2e-tests-projected-p55b9, resource: bindings, ignored listing per whitelist May 2 11:40:02.160: INFO: namespace e2e-tests-projected-p55b9 deletion completed in 6.101414514s • [SLOW TEST:12.689 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:40:02.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-a90307a7-8c69-11ea-8045-0242ac110017 STEP: Creating a pod to test consume configMaps May 2 11:40:02.368: INFO: Waiting up to 5m0s for pod "pod-configmaps-a90802e6-8c69-11ea-8045-0242ac110017" in namespace "e2e-tests-configmap-rdhcc" to be "success or failure" May 2 11:40:02.378: INFO: Pod "pod-configmaps-a90802e6-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 10.059711ms May 2 11:40:04.381: INFO: Pod "pod-configmaps-a90802e6-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013343357s May 2 11:40:06.385: INFO: Pod "pod-configmaps-a90802e6-8c69-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016698984s STEP: Saw pod success May 2 11:40:06.385: INFO: Pod "pod-configmaps-a90802e6-8c69-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:40:06.387: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-a90802e6-8c69-11ea-8045-0242ac110017 container configmap-volume-test: STEP: delete the pod May 2 11:40:06.428: INFO: Waiting for pod pod-configmaps-a90802e6-8c69-11ea-8045-0242ac110017 to disappear May 2 11:40:06.576: INFO: Pod pod-configmaps-a90802e6-8c69-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:40:06.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-rdhcc" for this suite. May 2 11:40:12.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:40:12.913: INFO: namespace: e2e-tests-configmap-rdhcc, resource: bindings, ignored listing per whitelist May 2 11:40:12.954: INFO: namespace e2e-tests-configmap-rdhcc deletion completed in 6.374348503s • [SLOW TEST:10.794 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:40:12.954: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium May 2 11:40:13.078: INFO: Waiting up to 5m0s for pod "pod-af6b3e3e-8c69-11ea-8045-0242ac110017" in namespace "e2e-tests-emptydir-sbkbs" to be "success or failure" May 2 11:40:13.095: INFO: Pod "pod-af6b3e3e-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.427029ms May 2 11:40:15.099: INFO: Pod "pod-af6b3e3e-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020599534s May 2 11:40:17.103: INFO: Pod "pod-af6b3e3e-8c69-11ea-8045-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.024951222s May 2 11:40:19.108: INFO: Pod "pod-af6b3e3e-8c69-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029409457s STEP: Saw pod success May 2 11:40:19.108: INFO: Pod "pod-af6b3e3e-8c69-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:40:19.111: INFO: Trying to get logs from node hunter-worker pod pod-af6b3e3e-8c69-11ea-8045-0242ac110017 container test-container: STEP: delete the pod May 2 11:40:19.170: INFO: Waiting for pod pod-af6b3e3e-8c69-11ea-8045-0242ac110017 to disappear May 2 11:40:19.178: INFO: Pod pod-af6b3e3e-8c69-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:40:19.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sbkbs" for this suite. 
May 2 11:40:25.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:40:25.228: INFO: namespace: e2e-tests-emptydir-sbkbs, resource: bindings, ignored listing per whitelist May 2 11:40:25.259: INFO: namespace e2e-tests-emptydir-sbkbs deletion completed in 6.077671491s • [SLOW TEST:12.305 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:40:25.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs May 2 11:40:25.404: INFO: Waiting up to 5m0s for pod "pod-b6c42db9-8c69-11ea-8045-0242ac110017" in namespace "e2e-tests-emptydir-28r4h" to be "success or failure" May 2 11:40:25.408: INFO: Pod "pod-b6c42db9-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.575075ms May 2 11:40:27.420: INFO: Pod "pod-b6c42db9-8c69-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015812809s May 2 11:40:29.423: INFO: Pod "pod-b6c42db9-8c69-11ea-8045-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.019285287s May 2 11:40:31.427: INFO: Pod "pod-b6c42db9-8c69-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023294887s STEP: Saw pod success May 2 11:40:31.427: INFO: Pod "pod-b6c42db9-8c69-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:40:31.430: INFO: Trying to get logs from node hunter-worker2 pod pod-b6c42db9-8c69-11ea-8045-0242ac110017 container test-container: STEP: delete the pod May 2 11:40:31.452: INFO: Waiting for pod pod-b6c42db9-8c69-11ea-8045-0242ac110017 to disappear May 2 11:40:31.540: INFO: Pod pod-b6c42db9-8c69-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:40:31.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-28r4h" for this suite. 
May 2 11:40:37.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:40:37.570: INFO: namespace: e2e-tests-emptydir-28r4h, resource: bindings, ignored listing per whitelist
May 2 11:40:37.625: INFO: namespace e2e-tests-emptydir-28r4h deletion completed in 6.081459642s
• [SLOW TEST:12.366 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:40:37.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 2 11:40:37.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
May 2 11:40:37.777: INFO: stderr: ""
May 2 11:40:37.777: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T17:08:34Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
May 2 11:40:37.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ttjsl'
May 2 11:40:41.184: INFO: stderr: ""
May 2 11:40:41.184: INFO: stdout: "replicationcontroller/redis-master created\n"
May 2 11:40:41.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ttjsl'
May 2 11:40:41.480: INFO: stderr: ""
May 2 11:40:41.480: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
May 2 11:40:42.484: INFO: Selector matched 1 pods for map[app:redis]
May 2 11:40:42.484: INFO: Found 0 / 1
May 2 11:40:43.677: INFO: Selector matched 1 pods for map[app:redis]
May 2 11:40:43.677: INFO: Found 0 / 1
May 2 11:40:44.484: INFO: Selector matched 1 pods for map[app:redis]
May 2 11:40:44.484: INFO: Found 0 / 1
May 2 11:40:45.649: INFO: Selector matched 1 pods for map[app:redis]
May 2 11:40:45.649: INFO: Found 0 / 1
May 2 11:40:46.485: INFO: Selector matched 1 pods for map[app:redis]
May 2 11:40:46.485: INFO: Found 1 / 1
May 2 11:40:46.485: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 2 11:40:46.489: INFO: Selector matched 1 pods for map[app:redis]
May 2 11:40:46.490: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 2 11:40:46.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-7kgfp --namespace=e2e-tests-kubectl-ttjsl'
May 2 11:40:46.609: INFO: stderr: ""
May 2 11:40:46.609: INFO: stdout: "Name: redis-master-7kgfp\nNamespace: e2e-tests-kubectl-ttjsl\nPriority: 0\nPriorityClassName: \nNode: hunter-worker2/172.17.0.4\nStart Time: Sat, 02 May 2020 11:40:41 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.205\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://8fca5f1f1d8c4422fb403352393d86049055328ca82549d7e1494cd7fcb241db\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 02 May 2020 11:40:45 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-h8x82 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-h8x82:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-h8x82\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned e2e-tests-kubectl-ttjsl/redis-master-7kgfp to hunter-worker2\n Normal Pulled 4s kubelet, hunter-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, hunter-worker2 Created container\n Normal Started 1s kubelet, hunter-worker2 Started container\n"
May 2 11:40:46.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-ttjsl'
May 2 11:40:46.741: INFO: stderr: ""
May 2 11:40:46.741: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-ttjsl\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-7kgfp\n"
May 2 11:40:46.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-ttjsl'
May 2 11:40:46.866: INFO: stderr: ""
May 2 11:40:46.866: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-ttjsl\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.98.153.143\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.205:6379\nSession Affinity: None\nEvents: \n"
May 2 11:40:46.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane'
May 2 11:40:46.998: INFO: stderr: ""
May 2 11:40:46.998: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:22:50 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 02 May 2020 11:40:38 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 02 May 2020 11:40:38 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 02 May 2020 11:40:38 +0000 Sun, 15 Mar 2020 18:22:49 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 02 May 2020 11:40:38 +0000 Sun, 15 Mar 2020 18:23:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.2\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3c4716968dac483293a23c2100ad64a5\n System UUID: 683417f7-64ca-431d-b8ac-22e73b26255e\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 47d\n kube-system kindnet-l2xm6 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 47d\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 47d\n kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 47d\n kube-system kube-proxy-mmppc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 47d\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 47d\n local-path-storage local-path-provisioner-77cfdd744c-q47vg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 47d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n"
May 2 11:40:46.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-ttjsl'
May 2 11:40:47.111: INFO: stderr: ""
May 2 11:40:47.111: INFO: stdout: "Name: e2e-tests-kubectl-ttjsl\nLabels: e2e-framework=kubectl\n e2e-run=3666bfb6-8c62-11ea-8045-0242ac110017\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:40:47.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ttjsl" for this suite.
May 2 11:41:11.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:41:11.148: INFO: namespace: e2e-tests-kubectl-ttjsl, resource: bindings, ignored listing per whitelist
May 2 11:41:11.204: INFO: namespace e2e-tests-kubectl-ttjsl deletion completed in 24.089428688s
• [SLOW TEST:33.578 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:41:11.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:42:11.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-4kh46" for this suite.
May 2 11:42:33.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:42:33.404: INFO: namespace: e2e-tests-container-probe-4kh46, resource: bindings, ignored listing per whitelist
May 2 11:42:33.456: INFO: namespace e2e-tests-container-probe-4kh46 deletion completed in 22.103420485s
• [SLOW TEST:82.252 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:42:33.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-0328329f-8c6a-11ea-8045-0242ac110017
STEP: Creating a pod to test consume secrets
May 2 11:42:33.575: INFO: Waiting up to 5m0s for pod "pod-secrets-03293e61-8c6a-11ea-8045-0242ac110017" in namespace "e2e-tests-secrets-2rwqx" to be "success or failure"
May 2 11:42:33.579: INFO: Pod "pod-secrets-03293e61-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.653076ms
May 2 11:42:35.583: INFO: Pod "pod-secrets-03293e61-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007870518s
May 2 11:42:37.587: INFO: Pod "pod-secrets-03293e61-8c6a-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011961092s
STEP: Saw pod success
May 2 11:42:37.587: INFO: Pod "pod-secrets-03293e61-8c6a-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:42:37.590: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-03293e61-8c6a-11ea-8045-0242ac110017 container secret-volume-test:
STEP: delete the pod
May 2 11:42:37.736: INFO: Waiting for pod pod-secrets-03293e61-8c6a-11ea-8045-0242ac110017 to disappear
May 2 11:42:37.789: INFO: Pod pod-secrets-03293e61-8c6a-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:42:37.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2rwqx" for this suite.
May 2 11:42:43.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:42:44.021: INFO: namespace: e2e-tests-secrets-2rwqx, resource: bindings, ignored listing per whitelist
May 2 11:42:44.026: INFO: namespace e2e-tests-secrets-2rwqx deletion completed in 6.233849961s
• [SLOW TEST:10.570 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:42:44.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 2 11:42:50.396: INFO: 10 pods remaining
May 2 11:42:50.396: INFO: 10 pods has nil DeletionTimestamp
May 2 11:42:50.396: INFO:
May 2 11:42:51.480: INFO: 5 pods remaining
May 2 11:42:51.480: INFO: 0 pods has nil DeletionTimestamp
May 2 11:42:51.480: INFO:
May 2 11:42:52.859: INFO: 0 pods remaining
May 2 11:42:52.859: INFO: 0 pods has nil DeletionTimestamp
May 2 11:42:52.859: INFO:
May 2 11:42:53.963: INFO: 0 pods remaining
May 2 11:42:53.963: INFO: 0 pods has nil DeletionTimestamp
May 2 11:42:53.963: INFO:
STEP: Gathering metrics
W0502 11:42:54.405511 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 2 11:42:54.405: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:42:54.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-gvc7z" for this suite.
May 2 11:43:00.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:43:00.734: INFO: namespace: e2e-tests-gc-gvc7z, resource: bindings, ignored listing per whitelist
May 2 11:43:00.792: INFO: namespace e2e-tests-gc-gvc7z deletion completed in 6.383782961s
• [SLOW TEST:16.766 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:43:00.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 2 11:43:00.928: INFO: Waiting up to 5m0s for pod "downwardapi-volume-136f7a8a-8c6a-11ea-8045-0242ac110017" in namespace "e2e-tests-downward-api-5pjjn" to be "success or failure"
May 2 11:43:00.937: INFO: Pod "downwardapi-volume-136f7a8a-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 8.7886ms
May 2 11:43:03.080: INFO: Pod "downwardapi-volume-136f7a8a-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15204918s
May 2 11:43:05.085: INFO: Pod "downwardapi-volume-136f7a8a-8c6a-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.157320112s
STEP: Saw pod success
May 2 11:43:05.086: INFO: Pod "downwardapi-volume-136f7a8a-8c6a-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:43:05.089: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-136f7a8a-8c6a-11ea-8045-0242ac110017 container client-container:
STEP: delete the pod
May 2 11:43:05.237: INFO: Waiting for pod downwardapi-volume-136f7a8a-8c6a-11ea-8045-0242ac110017 to disappear
May 2 11:43:05.320: INFO: Pod downwardapi-volume-136f7a8a-8c6a-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:43:05.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5pjjn" for this suite.
May 2 11:43:11.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:43:11.539: INFO: namespace: e2e-tests-downward-api-5pjjn, resource: bindings, ignored listing per whitelist
May 2 11:43:11.570: INFO: namespace e2e-tests-downward-api-5pjjn deletion completed in 6.24547145s
• [SLOW TEST:10.777 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:43:11.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 2 11:43:11.689: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19e17a80-8c6a-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-br85q" to be "success or failure"
May 2 11:43:11.733: INFO: Pod "downwardapi-volume-19e17a80-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 44.455165ms
May 2 11:43:13.737: INFO: Pod "downwardapi-volume-19e17a80-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048253033s
May 2 11:43:15.741: INFO: Pod "downwardapi-volume-19e17a80-8c6a-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052347038s
STEP: Saw pod success
May 2 11:43:15.741: INFO: Pod "downwardapi-volume-19e17a80-8c6a-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:43:15.744: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-19e17a80-8c6a-11ea-8045-0242ac110017 container client-container:
STEP: delete the pod
May 2 11:43:15.783: INFO: Waiting for pod downwardapi-volume-19e17a80-8c6a-11ea-8045-0242ac110017 to disappear
May 2 11:43:15.802: INFO: Pod downwardapi-volume-19e17a80-8c6a-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:43:15.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-br85q" for this suite.
May 2 11:43:21.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:43:21.851: INFO: namespace: e2e-tests-projected-br85q, resource: bindings, ignored listing per whitelist
May 2 11:43:21.891: INFO: namespace e2e-tests-projected-br85q deletion completed in 6.085903887s
• [SLOW TEST:10.322 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:43:21.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 2 11:43:22.015: INFO: Waiting up to 5m0s for pod "downwardapi-volume-200850dd-8c6a-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-ql4dc" to be "success or failure"
May 2 11:43:22.044: INFO: Pod "downwardapi-volume-200850dd-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 28.626005ms
May 2 11:43:24.048: INFO: Pod "downwardapi-volume-200850dd-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032636638s
May 2 11:43:26.052: INFO: Pod "downwardapi-volume-200850dd-8c6a-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03713134s
STEP: Saw pod success
May 2 11:43:26.052: INFO: Pod "downwardapi-volume-200850dd-8c6a-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:43:26.056: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-200850dd-8c6a-11ea-8045-0242ac110017 container client-container:
STEP: delete the pod
May 2 11:43:26.098: INFO: Waiting for pod downwardapi-volume-200850dd-8c6a-11ea-8045-0242ac110017 to disappear
May 2 11:43:26.115: INFO: Pod downwardapi-volume-200850dd-8c6a-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:43:26.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ql4dc" for this suite.
May 2 11:43:32.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:43:32.167: INFO: namespace: e2e-tests-projected-ql4dc, resource: bindings, ignored listing per whitelist
May 2 11:43:32.239: INFO: namespace e2e-tests-projected-ql4dc deletion completed in 6.12017462s
• [SLOW TEST:10.347 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:43:32.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
May 2 11:43:36.380: INFO: Pod pod-hostip-262ff0a3-8c6a-11ea-8045-0242ac110017 has hostIP: 172.17.0.4
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:43:36.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-t6nrx" for this suite.
May 2 11:43:58.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:43:58.536: INFO: namespace: e2e-tests-pods-t6nrx, resource: bindings, ignored listing per whitelist
May 2 11:43:58.548: INFO: namespace e2e-tests-pods-t6nrx deletion completed in 22.165759016s
• [SLOW TEST:26.309 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:43:58.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:44:04.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-mmtr7" for this suite.
May 2 11:44:10.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:44:10.985: INFO: namespace: e2e-tests-namespaces-mmtr7, resource: bindings, ignored listing per whitelist
May 2 11:44:10.994: INFO: namespace e2e-tests-namespaces-mmtr7 deletion completed in 6.100787609s
STEP: Destroying namespace "e2e-tests-nsdeletetest-clm2r" for this suite.
May 2 11:44:10.997: INFO: Namespace e2e-tests-nsdeletetest-clm2r was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-2ntzk" for this suite.
May 2 11:44:17.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:44:17.019: INFO: namespace: e2e-tests-nsdeletetest-2ntzk, resource: bindings, ignored listing per whitelist
May 2 11:44:17.080: INFO: namespace e2e-tests-nsdeletetest-2ntzk deletion completed in 6.083342474s
• [SLOW TEST:18.531 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:44:17.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 2 11:44:17.287: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40f6f366-8c6a-11ea-8045-0242ac110017" in namespace "e2e-tests-downward-api-xjj62" to be "success or failure"
May 2 11:44:17.315: INFO: Pod "downwardapi-volume-40f6f366-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 27.637558ms
May 2 11:44:19.319: INFO: Pod "downwardapi-volume-40f6f366-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031776501s
May 2 11:44:21.323: INFO: Pod "downwardapi-volume-40f6f366-8c6a-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035849308s
STEP: Saw pod success
May 2 11:44:21.323: INFO: Pod "downwardapi-volume-40f6f366-8c6a-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:44:21.326: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-40f6f366-8c6a-11ea-8045-0242ac110017 container client-container:
STEP: delete the pod
May 2 11:44:21.371: INFO: Waiting for pod downwardapi-volume-40f6f366-8c6a-11ea-8045-0242ac110017 to disappear
May 2 11:44:21.393: INFO: Pod downwardapi-volume-40f6f366-8c6a-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:44:21.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xjj62" for this suite.
May 2 11:44:27.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:44:27.493: INFO: namespace: e2e-tests-downward-api-xjj62, resource: bindings, ignored listing per whitelist
May 2 11:44:27.524: INFO: namespace e2e-tests-downward-api-xjj62 deletion completed in 6.127087691s
• [SLOW TEST:10.444 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Probing container
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2
11:44:27.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-rsl7g May 2 11:44:31.660: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-rsl7g STEP: checking the pod's current state and verifying that restartCount is present May 2 11:44:31.663: INFO: Initial restart count of pod liveness-exec is 0 May 2 11:45:23.886: INFO: Restart count of pod e2e-tests-container-probe-rsl7g/liveness-exec is now 1 (52.222829479s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:45:23.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-rsl7g" for this suite. 
May 2 11:45:29.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:45:29.962: INFO: namespace: e2e-tests-container-probe-rsl7g, resource: bindings, ignored listing per whitelist May 2 11:45:30.011: INFO: namespace e2e-tests-container-probe-rsl7g deletion completed in 6.085701432s • [SLOW TEST:62.487 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:45:30.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 2 11:45:30.118: INFO: Creating ReplicaSet my-hostname-basic-6c64b64f-8c6a-11ea-8045-0242ac110017 May 2 11:45:30.161: INFO: Pod name my-hostname-basic-6c64b64f-8c6a-11ea-8045-0242ac110017: Found 0 pods out of 1 May 2 11:45:35.179: INFO: Pod name my-hostname-basic-6c64b64f-8c6a-11ea-8045-0242ac110017: Found 1 pods out of 1 May 2 11:45:35.179: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-6c64b64f-8c6a-11ea-8045-0242ac110017" is running May 2 11:45:35.183: INFO: Pod 
"my-hostname-basic-6c64b64f-8c6a-11ea-8045-0242ac110017-f9mgr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 11:45:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 11:45:32 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 11:45:32 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-02 11:45:30 +0000 UTC Reason: Message:}]) May 2 11:45:35.183: INFO: Trying to dial the pod May 2 11:45:40.193: INFO: Controller my-hostname-basic-6c64b64f-8c6a-11ea-8045-0242ac110017: Got expected result from replica 1 [my-hostname-basic-6c64b64f-8c6a-11ea-8045-0242ac110017-f9mgr]: "my-hostname-basic-6c64b64f-8c6a-11ea-8045-0242ac110017-f9mgr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:45:40.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-479xs" for this suite. 
May 2 11:45:46.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:45:46.290: INFO: namespace: e2e-tests-replicaset-479xs, resource: bindings, ignored listing per whitelist May 2 11:45:46.298: INFO: namespace e2e-tests-replicaset-479xs deletion completed in 6.100859912s • [SLOW TEST:16.286 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:45:46.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token May 2 11:45:47.019: INFO: created pod pod-service-account-defaultsa May 2 11:45:47.019: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 2 11:45:47.042: INFO: created pod pod-service-account-mountsa May 2 11:45:47.042: INFO: pod pod-service-account-mountsa service account token volume mount: true May 2 11:45:47.062: INFO: created pod pod-service-account-nomountsa May 2 11:45:47.062: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 2 11:45:47.077: 
INFO: created pod pod-service-account-defaultsa-mountspec May 2 11:45:47.077: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 2 11:45:47.133: INFO: created pod pod-service-account-mountsa-mountspec May 2 11:45:47.133: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 2 11:45:47.204: INFO: created pod pod-service-account-nomountsa-mountspec May 2 11:45:47.204: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 2 11:45:47.249: INFO: created pod pod-service-account-defaultsa-nomountspec May 2 11:45:47.249: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 2 11:45:47.279: INFO: created pod pod-service-account-mountsa-nomountspec May 2 11:45:47.279: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 2 11:45:47.343: INFO: created pod pod-service-account-nomountsa-nomountspec May 2 11:45:47.343: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:45:47.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-d7s66" for this suite. 
May 2 11:46:19.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:46:19.487: INFO: namespace: e2e-tests-svcaccounts-d7s66, resource: bindings, ignored listing per whitelist May 2 11:46:19.536: INFO: namespace e2e-tests-svcaccounts-d7s66 deletion completed in 32.161291085s • [SLOW TEST:33.238 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:46:19.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-89e8397c-8c6a-11ea-8045-0242ac110017 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:46:25.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-hcbsb" for this suite. 
May 2 11:46:47.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:46:47.720: INFO: namespace: e2e-tests-configmap-hcbsb, resource: bindings, ignored listing per whitelist May 2 11:46:47.787: INFO: namespace e2e-tests-configmap-hcbsb deletion completed in 22.110742522s • [SLOW TEST:28.250 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:46:47.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-qdztp STEP: creating a selector STEP: Creating the service pods in kubernetes May 2 11:46:47.906: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 2 11:47:12.038: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.223 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-qdztp PodName:host-test-container-pod ContainerName:hostexec 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 11:47:12.038: INFO: >>> kubeConfig: /root/.kube/config I0502 11:47:12.072005 6 log.go:172] (0xc0011c22c0) (0xc002243f40) Create stream I0502 11:47:12.072029 6 log.go:172] (0xc0011c22c0) (0xc002243f40) Stream added, broadcasting: 1 I0502 11:47:12.074707 6 log.go:172] (0xc0011c22c0) Reply frame received for 1 I0502 11:47:12.074754 6 log.go:172] (0xc0011c22c0) (0xc001feb720) Create stream I0502 11:47:12.074775 6 log.go:172] (0xc0011c22c0) (0xc001feb720) Stream added, broadcasting: 3 I0502 11:47:12.075825 6 log.go:172] (0xc0011c22c0) Reply frame received for 3 I0502 11:47:12.075869 6 log.go:172] (0xc0011c22c0) (0xc00229a0a0) Create stream I0502 11:47:12.075884 6 log.go:172] (0xc0011c22c0) (0xc00229a0a0) Stream added, broadcasting: 5 I0502 11:47:12.076786 6 log.go:172] (0xc0011c22c0) Reply frame received for 5 I0502 11:47:13.120674 6 log.go:172] (0xc0011c22c0) Data frame received for 3 I0502 11:47:13.120722 6 log.go:172] (0xc001feb720) (3) Data frame handling I0502 11:47:13.120757 6 log.go:172] (0xc001feb720) (3) Data frame sent I0502 11:47:13.120776 6 log.go:172] (0xc0011c22c0) Data frame received for 3 I0502 11:47:13.120821 6 log.go:172] (0xc001feb720) (3) Data frame handling I0502 11:47:13.121730 6 log.go:172] (0xc0011c22c0) Data frame received for 5 I0502 11:47:13.121756 6 log.go:172] (0xc00229a0a0) (5) Data frame handling I0502 11:47:13.123042 6 log.go:172] (0xc0011c22c0) Data frame received for 1 I0502 11:47:13.123055 6 log.go:172] (0xc002243f40) (1) Data frame handling I0502 11:47:13.123062 6 log.go:172] (0xc002243f40) (1) Data frame sent I0502 11:47:13.123076 6 log.go:172] (0xc0011c22c0) (0xc002243f40) Stream removed, broadcasting: 1 I0502 11:47:13.123109 6 log.go:172] (0xc0011c22c0) Go away received I0502 11:47:13.123140 6 log.go:172] (0xc0011c22c0) (0xc002243f40) Stream removed, broadcasting: 1 I0502 11:47:13.123152 6 log.go:172] (0xc0011c22c0) (0xc001feb720) Stream removed, 
broadcasting: 3 I0502 11:47:13.123158 6 log.go:172] (0xc0011c22c0) (0xc00229a0a0) Stream removed, broadcasting: 5 May 2 11:47:13.123: INFO: Found all expected endpoints: [netserver-0] May 2 11:47:13.145: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.206 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-qdztp PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 11:47:13.145: INFO: >>> kubeConfig: /root/.kube/config I0502 11:47:13.172797 6 log.go:172] (0xc0011cc2c0) (0xc0022f00a0) Create stream I0502 11:47:13.172828 6 log.go:172] (0xc0011cc2c0) (0xc0022f00a0) Stream added, broadcasting: 1 I0502 11:47:13.185741 6 log.go:172] (0xc0011cc2c0) Reply frame received for 1 I0502 11:47:13.185781 6 log.go:172] (0xc0011cc2c0) (0xc0022f0140) Create stream I0502 11:47:13.185789 6 log.go:172] (0xc0011cc2c0) (0xc0022f0140) Stream added, broadcasting: 3 I0502 11:47:13.186467 6 log.go:172] (0xc0011cc2c0) Reply frame received for 3 I0502 11:47:13.186496 6 log.go:172] (0xc0011cc2c0) (0xc00229a140) Create stream I0502 11:47:13.186512 6 log.go:172] (0xc0011cc2c0) (0xc00229a140) Stream added, broadcasting: 5 I0502 11:47:13.187304 6 log.go:172] (0xc0011cc2c0) Reply frame received for 5 I0502 11:47:14.251355 6 log.go:172] (0xc0011cc2c0) Data frame received for 3 I0502 11:47:14.251422 6 log.go:172] (0xc0022f0140) (3) Data frame handling I0502 11:47:14.251471 6 log.go:172] (0xc0022f0140) (3) Data frame sent I0502 11:47:14.251497 6 log.go:172] (0xc0011cc2c0) Data frame received for 3 I0502 11:47:14.251514 6 log.go:172] (0xc0022f0140) (3) Data frame handling I0502 11:47:14.251651 6 log.go:172] (0xc0011cc2c0) Data frame received for 5 I0502 11:47:14.251670 6 log.go:172] (0xc00229a140) (5) Data frame handling I0502 11:47:14.253895 6 log.go:172] (0xc0011cc2c0) Data frame received for 1 I0502 11:47:14.253914 6 log.go:172] (0xc0022f00a0) (1) Data frame handling I0502 
11:47:14.253923 6 log.go:172] (0xc0022f00a0) (1) Data frame sent I0502 11:47:14.253930 6 log.go:172] (0xc0011cc2c0) (0xc0022f00a0) Stream removed, broadcasting: 1 I0502 11:47:14.253994 6 log.go:172] (0xc0011cc2c0) (0xc0022f00a0) Stream removed, broadcasting: 1 I0502 11:47:14.254015 6 log.go:172] (0xc0011cc2c0) (0xc0022f0140) Stream removed, broadcasting: 3 I0502 11:47:14.254020 6 log.go:172] (0xc0011cc2c0) (0xc00229a140) Stream removed, broadcasting: 5 May 2 11:47:14.254: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:47:14.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0502 11:47:14.254234 6 log.go:172] (0xc0011cc2c0) Go away received STEP: Destroying namespace "e2e-tests-pod-network-test-qdztp" for this suite. May 2 11:47:38.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:47:38.322: INFO: namespace: e2e-tests-pod-network-test-qdztp, resource: bindings, ignored listing per whitelist May 2 11:47:38.376: INFO: namespace e2e-tests-pod-network-test-qdztp deletion completed in 24.119085312s • [SLOW TEST:50.589 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:47:38.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 2 11:47:38.487: INFO: Waiting up to 5m0s for pod "downward-api-b8e6ad65-8c6a-11ea-8045-0242ac110017" in namespace "e2e-tests-downward-api-xhtrd" to be "success or failure" May 2 11:47:38.521: INFO: Pod "downward-api-b8e6ad65-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 33.885544ms May 2 11:47:40.552: INFO: Pod "downward-api-b8e6ad65-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064862416s May 2 11:47:42.556: INFO: Pod "downward-api-b8e6ad65-8c6a-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068815989s STEP: Saw pod success May 2 11:47:42.556: INFO: Pod "downward-api-b8e6ad65-8c6a-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:47:42.560: INFO: Trying to get logs from node hunter-worker pod downward-api-b8e6ad65-8c6a-11ea-8045-0242ac110017 container dapi-container: STEP: delete the pod May 2 11:47:42.576: INFO: Waiting for pod downward-api-b8e6ad65-8c6a-11ea-8045-0242ac110017 to disappear May 2 11:47:42.599: INFO: Pod downward-api-b8e6ad65-8c6a-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:47:42.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xhtrd" for this suite. 
May 2 11:47:48.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:47:48.682: INFO: namespace: e2e-tests-downward-api-xhtrd, resource: bindings, ignored listing per whitelist May 2 11:47:48.712: INFO: namespace e2e-tests-downward-api-xhtrd deletion completed in 6.109225784s • [SLOW TEST:10.336 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:47:48.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs May 2 11:47:48.870: INFO: Waiting up to 5m0s for pod "pod-bf166c12-8c6a-11ea-8045-0242ac110017" in namespace "e2e-tests-emptydir-2f9tf" to be "success or failure" May 2 11:47:49.011: INFO: Pod "pod-bf166c12-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 140.427209ms May 2 11:47:51.014: INFO: Pod "pod-bf166c12-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.143712834s May 2 11:47:53.018: INFO: Pod "pod-bf166c12-8c6a-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.147971258s STEP: Saw pod success May 2 11:47:53.018: INFO: Pod "pod-bf166c12-8c6a-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:47:53.021: INFO: Trying to get logs from node hunter-worker2 pod pod-bf166c12-8c6a-11ea-8045-0242ac110017 container test-container: STEP: delete the pod May 2 11:47:53.205: INFO: Waiting for pod pod-bf166c12-8c6a-11ea-8045-0242ac110017 to disappear May 2 11:47:53.216: INFO: Pod pod-bf166c12-8c6a-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:47:53.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-2f9tf" for this suite. May 2 11:47:59.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:47:59.262: INFO: namespace: e2e-tests-emptydir-2f9tf, resource: bindings, ignored listing per whitelist May 2 11:47:59.298: INFO: namespace e2e-tests-emptydir-2f9tf deletion completed in 6.078159461s • [SLOW TEST:10.585 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:47:59.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-c55c4f21-8c6a-11ea-8045-0242ac110017 STEP: Creating a pod to test consume secrets May 2 11:47:59.404: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c55e9ff4-8c6a-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-sbmdq" to be "success or failure" May 2 11:47:59.423: INFO: Pod "pod-projected-secrets-c55e9ff4-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 18.820918ms May 2 11:48:01.427: INFO: Pod "pod-projected-secrets-c55e9ff4-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022575973s May 2 11:48:03.431: INFO: Pod "pod-projected-secrets-c55e9ff4-8c6a-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027081725s STEP: Saw pod success May 2 11:48:03.431: INFO: Pod "pod-projected-secrets-c55e9ff4-8c6a-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:48:03.434: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-c55e9ff4-8c6a-11ea-8045-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 2 11:48:03.486: INFO: Waiting for pod pod-projected-secrets-c55e9ff4-8c6a-11ea-8045-0242ac110017 to disappear May 2 11:48:03.503: INFO: Pod pod-projected-secrets-c55e9ff4-8c6a-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:48:03.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-sbmdq" for this suite. May 2 11:48:09.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:48:09.530: INFO: namespace: e2e-tests-projected-sbmdq, resource: bindings, ignored listing per whitelist May 2 11:48:09.592: INFO: namespace e2e-tests-projected-sbmdq deletion completed in 6.084330393s • [SLOW TEST:10.294 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:48:09.592: INFO: >>> 
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-5xc6h
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-5xc6h
STEP: Deleting pre-stop pod
May 2 11:48:22.734: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:48:22.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-5xc6h" for this suite. 
May 2 11:49:01.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:49:01.199: INFO: namespace: e2e-tests-prestop-5xc6h, resource: bindings, ignored listing per whitelist
May 2 11:49:01.231: INFO: namespace e2e-tests-prestop-5xc6h deletion completed in 38.427497528s

• [SLOW TEST:51.638 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:49:01.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-ea46a3bf-8c6a-11ea-8045-0242ac110017
STEP: Creating a pod to test consume configMaps
May 2 11:49:01.339: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea4875fe-8c6a-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-g22t9" to be "success or failure"
May 2 11:49:01.356: INFO: Pod "pod-projected-configmaps-ea4875fe-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.657706ms
May 2 11:49:03.360: INFO: Pod "pod-projected-configmaps-ea4875fe-8c6a-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021125007s
May 2 11:49:05.369: INFO: Pod "pod-projected-configmaps-ea4875fe-8c6a-11ea-8045-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.029178993s
May 2 11:49:07.373: INFO: Pod "pod-projected-configmaps-ea4875fe-8c6a-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033263844s
STEP: Saw pod success
May 2 11:49:07.373: INFO: Pod "pod-projected-configmaps-ea4875fe-8c6a-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:49:07.376: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-ea4875fe-8c6a-11ea-8045-0242ac110017 container projected-configmap-volume-test:
STEP: delete the pod
May 2 11:49:07.392: INFO: Waiting for pod pod-projected-configmaps-ea4875fe-8c6a-11ea-8045-0242ac110017 to disappear
May 2 11:49:07.408: INFO: Pod pod-projected-configmaps-ea4875fe-8c6a-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:49:07.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g22t9" for this suite.
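What this spec exercises is a ConfigMap consumed through a projected volume. A minimal hand-written equivalent (hypothetical ConfigMap and key names — the framework generates its pods in Go rather than from a manifest):

```yaml
# Sketch of a pod consuming a ConfigMap via a projected volume.
# "my-config" and "my-key" are hypothetical names for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: my-config
  containers:
  - name: test
    image: busybox
    # Prints the value projected into the volume, then exits,
    # which is why the pod above polls toward Phase="Succeeded".
    command: ["cat", "/etc/projected/my-key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
```

The test then reads the container's logs (the "Trying to get logs" line above) and checks they match the ConfigMap's data.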
May 2 11:49:13.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:49:13.463: INFO: namespace: e2e-tests-projected-g22t9, resource: bindings, ignored listing per whitelist
May 2 11:49:13.498: INFO: namespace e2e-tests-projected-g22t9 deletion completed in 6.08598966s

• [SLOW TEST:12.267 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:49:13.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
May 2 11:49:20.332: INFO: Successfully updated pod "labelsupdatef197f010-8c6a-11ea-8045-0242ac110017"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:49:22.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nj9pq" for this suite.
May 2 11:49:44.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:49:44.412: INFO: namespace: e2e-tests-projected-nj9pq, resource: bindings, ignored listing per whitelist
May 2 11:49:44.498: INFO: namespace e2e-tests-projected-nj9pq deletion completed in 22.126405918s

• [SLOW TEST:31.000 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:49:44.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-0412276b-8c6b-11ea-8045-0242ac110017
STEP: Creating a pod to test consume secrets
May 2 11:49:44.610: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0413e2d1-8c6b-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-kvs8h" to be "success or failure"
May 2 11:49:44.614: INFO: Pod "pod-projected-secrets-0413e2d1-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056392ms
May 2 11:49:46.617: INFO: Pod "pod-projected-secrets-0413e2d1-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007484467s
May 2 11:49:48.692: INFO: Pod "pod-projected-secrets-0413e2d1-8c6b-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081793057s
STEP: Saw pod success
May 2 11:49:48.692: INFO: Pod "pod-projected-secrets-0413e2d1-8c6b-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:49:48.695: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-0413e2d1-8c6b-11ea-8045-0242ac110017 container projected-secret-volume-test:
STEP: delete the pod
May 2 11:49:48.717: INFO: Waiting for pod pod-projected-secrets-0413e2d1-8c6b-11ea-8045-0242ac110017 to disappear
May 2 11:49:48.722: INFO: Pod pod-projected-secrets-0413e2d1-8c6b-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:49:48.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kvs8h" for this suite.
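The defaultMode variant verified here sets the file permission bits on the files a projected Secret produces. A minimal sketch (hypothetical Secret name; the mode value is illustrative, not necessarily the one the test uses):

```yaml
# Sketch of a pod mounting a projected Secret with defaultMode.
# "my-secret" is a hypothetical name for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  volumes:
  - name: sec
    projected:
      defaultMode: 0400      # the knob this spec verifies
      sources:
      - secret:
          name: my-secret
  containers:
  - name: test
    image: busybox
    # Listing the mount lets the test assert the projected files
    # carry the requested mode bits.
    command: ["ls", "-l", "/etc/projected-secret"]
    volumeMounts:
    - name: sec
      mountPath: /etc/projected-secret
```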
May 2 11:49:54.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:49:54.807: INFO: namespace: e2e-tests-projected-kvs8h, resource: bindings, ignored listing per whitelist
May 2 11:49:54.833: INFO: namespace e2e-tests-projected-kvs8h deletion completed in 6.10753515s

• [SLOW TEST:10.335 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:49:54.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-0a38e9ef-8c6b-11ea-8045-0242ac110017
STEP: Creating a pod to test consume configMaps
May 2 11:49:54.932: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a3b23c0-8c6b-11ea-8045-0242ac110017" in namespace "e2e-tests-configmap-5k8bx" to be "success or failure"
May 2 11:49:54.937: INFO: Pod "pod-configmaps-0a3b23c0-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.032231ms
May 2 11:49:56.941: INFO: Pod "pod-configmaps-0a3b23c0-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009202445s
May 2 11:49:58.945: INFO: Pod "pod-configmaps-0a3b23c0-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013070743s
May 2 11:50:00.949: INFO: Pod "pod-configmaps-0a3b23c0-8c6b-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017099725s
STEP: Saw pod success
May 2 11:50:00.949: INFO: Pod "pod-configmaps-0a3b23c0-8c6b-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:50:00.953: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-0a3b23c0-8c6b-11ea-8045-0242ac110017 container configmap-volume-test:
STEP: delete the pod
May 2 11:50:01.008: INFO: Waiting for pod pod-configmaps-0a3b23c0-8c6b-11ea-8045-0242ac110017 to disappear
May 2 11:50:01.021: INFO: Pod pod-configmaps-0a3b23c0-8c6b-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:50:01.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-5k8bx" for this suite.
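The repeated 'Waiting up to 5m0s for pod ... to be "success or failure"' sequences above are the framework's poll-until-condition loop, implemented in Go. The same pattern in plain shell, as a generic sketch (not the framework's actual code):

```shell
# Poll a command until it succeeds or a deadline passes, mirroring
# the wait-for-pod-phase loops in the log above. Generic sketch only.
retry_until() {
  timeout_s=$1; shift
  interval_s=$1; shift
  elapsed=0
  while [ "$elapsed" -lt "$timeout_s" ]; do
    if "$@"; then
      return 0          # condition met ("Saw pod success")
    fi
    sleep "$interval_s"
    elapsed=$((elapsed + interval_s))
  done
  return 1              # deadline exceeded
}

# Example: wait up to 300s, checking every 2s, for a marker file.
# retry_until 300 2 test -f /tmp/pod-succeeded
```

The framework uses a 2-second interval, which is why the Elapsed values above step in roughly 2s increments.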
May 2 11:50:07.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:50:07.059: INFO: namespace: e2e-tests-configmap-5k8bx, resource: bindings, ignored listing per whitelist
May 2 11:50:07.110: INFO: namespace e2e-tests-configmap-5k8bx deletion completed in 6.085450376s

• [SLOW TEST:12.277 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:50:07.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
May 2 11:50:08.071: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

May 2 11:50:08.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ns6sc'
May 2 11:50:09.456: INFO: stderr: ""
May 2 11:50:09.456: INFO: stdout: "service/redis-slave created\n"
May 2 11:50:09.457: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

May 2 11:50:09.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ns6sc'
May 2 11:50:11.143: INFO: stderr: ""
May 2 11:50:11.143: INFO: stdout: "service/redis-master created\n"
May 2 11:50:11.143: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 2 11:50:11.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ns6sc'
May 2 11:50:12.105: INFO: stderr: ""
May 2 11:50:12.105: INFO: stdout: "service/frontend created\n"
May 2 11:50:12.105: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

May 2 11:50:12.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ns6sc'
May 2 11:50:13.614: INFO: stderr: ""
May 2 11:50:13.614: INFO: stdout: "deployment.extensions/frontend created\n"
May 2 11:50:13.614:
INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 2 11:50:13.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ns6sc'
May 2 11:50:14.711: INFO: stderr: ""
May 2 11:50:14.711: INFO: stdout: "deployment.extensions/redis-master created\n"
May 2 11:50:14.711: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

May 2 11:50:14.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ns6sc'
May 2 11:50:16.506: INFO: stderr: ""
May 2 11:50:16.506: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
May 2 11:50:16.506: INFO: Waiting for all frontend pods to be Running.
May 2 11:50:31.557: INFO: Waiting for frontend to serve content.
May 2 11:50:31.693: INFO: Trying to add a new entry to the guestbook.
May 2 11:50:31.892: INFO: Verifying that added entry can be retrieved.
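A note on the manifests this test applies: the Deployments use apiVersion: extensions/v1beta1, which matches this v1.13 cluster but was removed in Kubernetes 1.16. On current clusters the equivalent manifest must use apps/v1, which also makes spec.selector mandatory. A sketch of the frontend Deployment in the newer form (content mirrors the manifest logged above; only apiVersion and the selector differ):

```yaml
# frontend Deployment rewritten for apps/v1 (required since k8s 1.16).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:            # mandatory under apps/v1
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
```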
STEP: using delete to clean up resources
May 2 11:50:31.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ns6sc'
May 2 11:50:32.588: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 2 11:50:32.588: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
May 2 11:50:32.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ns6sc'
May 2 11:50:33.416: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 2 11:50:33.416: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
May 2 11:50:33.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ns6sc'
May 2 11:50:34.790: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 2 11:50:34.790: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 2 11:50:34.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ns6sc'
May 2 11:50:35.202: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 2 11:50:35.202: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
May 2 11:50:35.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ns6sc'
May 2 11:50:36.119: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 2 11:50:36.119: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
May 2 11:50:36.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ns6sc'
May 2 11:50:37.123: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 2 11:50:37.123: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:50:37.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ns6sc" for this suite.
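Each cleanup step above shells out to kubectl delete with --grace-period=0 --force, which is what triggers the repeated "Immediate deletion does not wait..." warning. Wrapped as a helper for clarity (a sketch; the kubectl binary path, kubeconfig, and flags are taken verbatim from the log, the function name is ours):

```shell
# Force-delete the resources in a manifest, mirroring the cleanup
# invocations in the log above. Immediate deletion does not wait for
# the resource to actually terminate, hence the warning kubectl prints.
force_delete() {
  manifest=$1
  namespace=$2
  kubectl --kubeconfig=/root/.kube/config delete \
    --grace-period=0 --force -f "$manifest" --namespace="$namespace"
}

# Example: force_delete frontend-service.yaml e2e-tests-kubectl-ns6sc
```

The namespace teardown that follows still takes ~47s because the namespace controller must finalize every object the force-deleted workloads left behind.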
May 2 11:51:24.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:51:24.094: INFO: namespace: e2e-tests-kubectl-ns6sc, resource: bindings, ignored listing per whitelist
May 2 11:51:24.152: INFO: namespace e2e-tests-kubectl-ns6sc deletion completed in 46.804370387s

• [SLOW TEST:77.041 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:51:24.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 2 11:51:24.632: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:51:33.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-p84f8" for this suite.
May 2 11:51:39.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:51:39.361: INFO: namespace: e2e-tests-init-container-p84f8, resource: bindings, ignored listing per whitelist
May 2 11:51:39.421: INFO: namespace e2e-tests-init-container-p84f8 deletion completed in 6.192002131s

• [SLOW TEST:15.269 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:51:39.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;
  check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;
  check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;
  check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;
  check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;
  test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-zdvxp.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-zdvxp.svc.cluster.local;
  test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-zdvxp.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  sleep 1;
done
STEP: Running these commands on jessie: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;
  check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;
  check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;
  check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;
  check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;
  check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;
  test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-zdvxp.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-zdvxp.svc.cluster.local;
  test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-zdvxp.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  sleep 1;
done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 2 11:51:46.223: INFO: DNS probes using e2e-tests-dns-zdvxp/dns-test-48e57307-8c6b-11ea-8045-0242ac110017 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:51:46.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-zdvxp" for this suite.
May 2 11:51:52.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:51:52.618: INFO: namespace: e2e-tests-dns-zdvxp, resource: bindings, ignored listing per whitelist
May 2 11:51:52.630: INFO: namespace e2e-tests-dns-zdvxp deletion completed in 6.215528642s

• [SLOW TEST:13.209 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:51:52.630: INFO:
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 2 11:51:52.775: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50785206-8c6b-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-hjdlx" to be "success or failure"
May 2 11:51:52.822: INFO: Pod "downwardapi-volume-50785206-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 46.473942ms
May 2 11:51:54.825: INFO: Pod "downwardapi-volume-50785206-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050197576s
May 2 11:51:56.831: INFO: Pod "downwardapi-volume-50785206-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055884788s
May 2 11:51:58.834: INFO: Pod "downwardapi-volume-50785206-8c6b-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059273525s
STEP: Saw pod success
May 2 11:51:58.834: INFO: Pod "downwardapi-volume-50785206-8c6b-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:51:58.836: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-50785206-8c6b-11ea-8045-0242ac110017 container client-container:
STEP: delete the pod
May 2 11:51:58.912: INFO: Waiting for pod downwardapi-volume-50785206-8c6b-11ea-8045-0242ac110017 to disappear
May 2 11:51:58.948: INFO: Pod downwardapi-volume-50785206-8c6b-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:51:58.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hjdlx" for this suite.
May 2 11:52:04.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:52:05.052: INFO: namespace: e2e-tests-projected-hjdlx, resource: bindings, ignored listing per whitelist
May 2 11:52:05.075: INFO: namespace e2e-tests-projected-hjdlx deletion completed in 6.124002787s

• [SLOW TEST:12.445 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:52:05.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 2 11:52:05.217: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57dc33a2-8c6b-11ea-8045-0242ac110017" in namespace "e2e-tests-downward-api-ps56w" to be "success or failure"
May 2 11:52:05.229: INFO: Pod "downwardapi-volume-57dc33a2-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 11.941506ms
May 2 11:52:07.233: INFO: Pod "downwardapi-volume-57dc33a2-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015761211s
May 2 11:52:09.455: INFO: Pod "downwardapi-volume-57dc33a2-8c6b-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.237936702s
STEP: Saw pod success
May 2 11:52:09.455: INFO: Pod "downwardapi-volume-57dc33a2-8c6b-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:52:09.468: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-57dc33a2-8c6b-11ea-8045-0242ac110017 container client-container:
STEP: delete the pod
May 2 11:52:09.766: INFO: Waiting for pod downwardapi-volume-57dc33a2-8c6b-11ea-8045-0242ac110017 to disappear
May 2 11:52:09.770: INFO: Pod downwardapi-volume-57dc33a2-8c6b-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:52:09.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ps56w" for this suite.
May 2 11:52:15.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:52:15.854: INFO: namespace: e2e-tests-downward-api-ps56w, resource: bindings, ignored listing per whitelist
May 2 11:52:15.875: INFO: namespace e2e-tests-downward-api-ps56w deletion completed in 6.10163884s

• [SLOW TEST:10.799 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:52:15.875: INFO: >>> kubeConfig:
/root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:52:20.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-4md74" for this suite. May 2 11:52:26.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:52:26.747: INFO: namespace: e2e-tests-emptydir-wrapper-4md74, resource: bindings, ignored listing per whitelist May 2 11:52:26.843: INFO: namespace e2e-tests-emptydir-wrapper-4md74 deletion completed in 6.178198209s • [SLOW TEST:10.968 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:52:26.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 2 11:52:26.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-nw445' May 2 11:52:29.721: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 2 11:52:29.721: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 2 11:52:29.805: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-knddb] May 2 11:52:29.805: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-knddb" in namespace "e2e-tests-kubectl-nw445" to be "running and ready" May 2 11:52:29.808: INFO: Pod "e2e-test-nginx-rc-knddb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.524135ms May 2 11:52:31.999: INFO: Pod "e2e-test-nginx-rc-knddb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194322615s May 2 11:52:34.004: INFO: Pod "e2e-test-nginx-rc-knddb": Phase="Running", Reason="", readiness=true. Elapsed: 4.198824322s May 2 11:52:34.004: INFO: Pod "e2e-test-nginx-rc-knddb" satisfied condition "running and ready" May 2 11:52:34.004: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-knddb] May 2 11:52:34.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-nw445' May 2 11:52:34.134: INFO: stderr: "" May 2 11:52:34.134: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 May 2 11:52:34.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-nw445' May 2 11:52:34.252: INFO: stderr: "" May 2 11:52:34.252: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:52:34.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nw445" for this suite. May 2 11:52:56.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:52:56.297: INFO: namespace: e2e-tests-kubectl-nw445, resource: bindings, ignored listing per whitelist May 2 11:52:56.339: INFO: namespace e2e-tests-kubectl-nw445 deletion completed in 22.083472085s • [SLOW TEST:29.495 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:52:56.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 2 11:52:56.447: INFO: Waiting up to 5m0s for pod "downwardapi-volume-766ad7e7-8c6b-11ea-8045-0242ac110017" in namespace "e2e-tests-downward-api-tscdm" to be "success or failure" May 2 11:52:56.450: INFO: Pod "downwardapi-volume-766ad7e7-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.197349ms May 2 11:52:58.599: INFO: Pod "downwardapi-volume-766ad7e7-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151881479s May 2 11:53:00.603: INFO: Pod "downwardapi-volume-766ad7e7-8c6b-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.156170351s STEP: Saw pod success May 2 11:53:00.603: INFO: Pod "downwardapi-volume-766ad7e7-8c6b-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:53:00.606: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-766ad7e7-8c6b-11ea-8045-0242ac110017 container client-container: STEP: delete the pod May 2 11:53:00.636: INFO: Waiting for pod downwardapi-volume-766ad7e7-8c6b-11ea-8045-0242ac110017 to disappear May 2 11:53:00.643: INFO: Pod downwardapi-volume-766ad7e7-8c6b-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:53:00.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-tscdm" for this suite. May 2 11:53:06.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:53:06.720: INFO: namespace: e2e-tests-downward-api-tscdm, resource: bindings, ignored listing per whitelist May 2 11:53:06.774: INFO: namespace e2e-tests-downward-api-tscdm deletion completed in 6.127801215s • [SLOW TEST:10.435 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:53:06.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-7ca05da1-8c6b-11ea-8045-0242ac110017 STEP: Creating a pod to test consume configMaps May 2 11:53:06.928: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ca3d375-8c6b-11ea-8045-0242ac110017" in namespace "e2e-tests-configmap-d4rxp" to be "success or failure" May 2 11:53:06.963: INFO: Pod "pod-configmaps-7ca3d375-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 35.028703ms May 2 11:53:08.967: INFO: Pod "pod-configmaps-7ca3d375-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039345628s May 2 11:53:10.971: INFO: Pod "pod-configmaps-7ca3d375-8c6b-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04343987s STEP: Saw pod success May 2 11:53:10.971: INFO: Pod "pod-configmaps-7ca3d375-8c6b-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 11:53:10.974: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-7ca3d375-8c6b-11ea-8045-0242ac110017 container configmap-volume-test: STEP: delete the pod May 2 11:53:11.046: INFO: Waiting for pod pod-configmaps-7ca3d375-8c6b-11ea-8045-0242ac110017 to disappear May 2 11:53:11.063: INFO: Pod pod-configmaps-7ca3d375-8c6b-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:53:11.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-d4rxp" for this suite. May 2 11:53:17.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:53:17.119: INFO: namespace: e2e-tests-configmap-d4rxp, resource: bindings, ignored listing per whitelist May 2 11:53:17.167: INFO: namespace e2e-tests-configmap-d4rxp deletion completed in 6.100424272s • [SLOW TEST:10.392 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:53:17.167: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:53:17.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-9g5x7" for this suite. May 2 11:53:23.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:53:23.468: INFO: namespace: e2e-tests-services-9g5x7, resource: bindings, ignored listing per whitelist May 2 11:53:23.514: INFO: namespace e2e-tests-services-9g5x7 deletion completed in 6.246257678s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:6.347 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:53:23.515: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-x4gxj [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet May 2 11:53:24.072: INFO: Found 0 stateful pods, waiting for 3 May 2 11:53:34.151: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 2 11:53:34.151: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 2 11:53:34.151: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 2 11:53:44.077: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 2 11:53:44.077: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 2 11:53:44.077: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 2 11:53:44.106: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 2 11:53:54.159: INFO: Updating stateful set ss2 May 2 11:53:54.393: INFO: Waiting for Pod e2e-tests-statefulset-x4gxj/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c 
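
The canary step above relies on the `partition` field of the StatefulSet `RollingUpdate` strategy: only pods with an ordinal greater than or equal to the partition receive the new template revision, which is why the test waits on ss2-2 alone. A minimal sketch of such a spec, assuming illustrative labels (the actual manifest used by the test is not shown in the log):

```yaml
# Sketch only: a partitioned rolling update as exercised by this test.
# With partition: 2 and 3 replicas, only ss2-2 is updated to the new
# template (the canary); ss2-0 and ss2-1 keep the old revision.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test          # matches "Creating service test" in the log
  replicas: 3
  selector:
    matchLabels:
      app: ss2               # illustrative label, not taken from the log
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2           # ordinals < 2 stay on the old revision
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine
```

Lowering the partition step by step (the "phased rolling update" later in the test) rolls the remaining ordinals forward one group at a time.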
STEP: Restoring Pods to the correct revision when they are deleted May 2 11:54:04.550: INFO: Found 1 stateful pods, waiting for 3 May 2 11:54:14.556: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 2 11:54:14.556: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 2 11:54:14.556: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 2 11:54:14.581: INFO: Updating stateful set ss2 May 2 11:54:14.593: INFO: Waiting for Pod e2e-tests-statefulset-x4gxj/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 2 11:54:24.620: INFO: Updating stateful set ss2 May 2 11:54:24.629: INFO: Waiting for StatefulSet e2e-tests-statefulset-x4gxj/ss2 to complete update May 2 11:54:24.630: INFO: Waiting for Pod e2e-tests-statefulset-x4gxj/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 2 11:54:34.638: INFO: Deleting all statefulset in ns e2e-tests-statefulset-x4gxj May 2 11:54:34.642: INFO: Scaling statefulset ss2 to 0 May 2 11:55:04.663: INFO: Waiting for statefulset status.replicas updated to 0 May 2 11:55:04.666: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:55:04.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-x4gxj" for this suite. 
May 2 11:55:12.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:55:12.790: INFO: namespace: e2e-tests-statefulset-x4gxj, resource: bindings, ignored listing per whitelist May 2 11:55:12.817: INFO: namespace e2e-tests-statefulset-x4gxj deletion completed in 8.129768611s • [SLOW TEST:109.303 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:55:12.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 2 11:55:12.956: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 2 11:55:17.971: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 2 11:55:17.971: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for 
deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 2 11:55:18.233: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-f6rnf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-f6rnf/deployments/test-cleanup-deployment,UID:cae3fd12-8c6b-11ea-99e8-0242ac110002,ResourceVersion:8346014,Generation:1,CreationTimestamp:2020-05-02 11:55:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 2 11:55:18.243: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
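
The Deployment dump above shows `RevisionHistoryLimit:*0`, which is the mechanism this test depends on: with a history limit of 0, the Deployment controller garbage-collects old ReplicaSets as soon as they are fully scaled down. A hedged reconstruction of the equivalent manifest, with field values taken from the dump and everything else illustrative:

```yaml
# Sketch reconstructed from the Deployment dump in the log above; the
# test's actual manifest is created programmatically and not printed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  namespace: e2e-tests-deployment-f6rnf
spec:
  replicas: 1
  revisionHistoryLimit: 0    # old ReplicaSets are deleted immediately
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```
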
May 2 11:55:18.243: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 2 11:55:18.244: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-f6rnf,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-f6rnf/replicasets/test-cleanup-controller,UID:c7c78acf-8c6b-11ea-99e8-0242ac110002,ResourceVersion:8346015,Generation:1,CreationTimestamp:2020-05-02 11:55:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment cae3fd12-8c6b-11ea-99e8-0242ac110002 0xc0021fb407 0xc0021fb408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 2 11:55:18.291: INFO: Pod "test-cleanup-controller-wd5mn" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-wd5mn,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-f6rnf,SelfLink:/api/v1/namespaces/e2e-tests-deployment-f6rnf/pods/test-cleanup-controller-wd5mn,UID:c7cb738e-8c6b-11ea-99e8-0242ac110002,ResourceVersion:8346009,Generation:0,CreationTimestamp:2020-05-02 11:55:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller c7c78acf-8c6b-11ea-99e8-0242ac110002 0xc002149eb7 0xc002149eb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-sjg9d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sjg9d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-sjg9d true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002149f30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002149f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:55:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:55:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:55:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 11:55:12 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.221,StartTime:2020-05-02 11:55:13 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 11:55:16 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fe73e35aaa989642e91050e2b470f7f54a6c49b093dfdfc3d0443dcf339f37ec}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:55:18.291: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-f6rnf" for this suite.
May 2 11:55:26.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:55:26.550: INFO: namespace: e2e-tests-deployment-f6rnf, resource: bindings, ignored listing per whitelist
May 2 11:55:26.600: INFO: namespace e2e-tests-deployment-f6rnf deletion completed in 8.305280419s

• [SLOW TEST:13.782 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] ReplicaSet
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:55:26.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May 2 11:55:34.083: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:55:35.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-2qh94" for this suite.
May 2 11:55:59.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:55:59.237: INFO: namespace: e2e-tests-replicaset-2qh94, resource: bindings, ignored listing per whitelist
May 2 11:55:59.277: INFO: namespace e2e-tests-replicaset-2qh94 deletion completed in 24.114762122s

• [SLOW TEST:32.677 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:55:59.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 2 11:55:59.554: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:55:59.557: INFO: Number of nodes with available pods: 0
May 2 11:55:59.557: INFO: Node hunter-worker is running more than one daemon pod
May 2 11:56:00.566: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:00.570: INFO: Number of nodes with available pods: 0
May 2 11:56:00.570: INFO: Node hunter-worker is running more than one daemon pod
May 2 11:56:01.727: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:01.730: INFO: Number of nodes with available pods: 0
May 2 11:56:01.730: INFO: Node hunter-worker is running more than one daemon pod
May 2 11:56:02.572: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:02.589: INFO: Number of nodes with available pods: 0
May 2 11:56:02.589: INFO: Node hunter-worker is running more than one daemon pod
May 2 11:56:03.563: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:03.566: INFO: Number of nodes with available pods: 1
May 2 11:56:03.566: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:56:04.563: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:04.566: INFO: Number of nodes with available pods: 2
May 2 11:56:04.566: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 2 11:56:04.583: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:04.588: INFO: Number of nodes with available pods: 1
May 2 11:56:04.588: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:56:05.594: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:05.598: INFO: Number of nodes with available pods: 1
May 2 11:56:05.598: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:56:06.593: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:06.596: INFO: Number of nodes with available pods: 1
May 2 11:56:06.596: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:56:07.594: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:07.598: INFO: Number of nodes with available pods: 1
May 2 11:56:07.598: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:56:08.594: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:08.598: INFO: Number of nodes with available pods: 1
May 2 11:56:08.598: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:56:09.594: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:09.598: INFO: Number of nodes with available pods: 1
May 2 11:56:09.598: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:56:10.593: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:10.597: INFO: Number of nodes with available pods: 1
May 2 11:56:10.597: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:56:11.611: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:11.614: INFO: Number of nodes with available pods: 1
May 2 11:56:11.614: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:56:12.594: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:12.597: INFO: Number of nodes with available pods: 1
May 2 11:56:12.597: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:56:13.593: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:13.596: INFO: Number of nodes with available pods: 1
May 2 11:56:13.596: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:56:14.593: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:14.596: INFO: Number of nodes with available pods: 1
May 2 11:56:14.596: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:56:15.593: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:15.596: INFO: Number of nodes with available pods: 1
May 2 11:56:15.596: INFO: Node hunter-worker2 is running more than one daemon pod
May 2 11:56:16.593: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 2 11:56:16.596: INFO: Number of nodes with available pods: 2
May 2 11:56:16.596: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-zmgm4, will wait for the garbage collector to delete the pods
May 2 11:56:17.011: INFO: Deleting DaemonSet.extensions daemon-set took: 358.281429ms
May 2 11:56:17.111: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.259925ms
May 2 11:56:31.815: INFO: Number of nodes with available pods: 0
May 2 11:56:31.815: INFO: Number of running nodes: 0, number of available pods: 0
May 2 11:56:31.818: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-zmgm4/daemonsets","resourceVersion":"8346295"},"items":null}
May 2 11:56:31.820: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-zmgm4/pods","resourceVersion":"8346295"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:56:31.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-zmgm4" for this suite.
May 2 11:56:37.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:56:37.888: INFO: namespace: e2e-tests-daemonsets-zmgm4, resource: bindings, ignored listing per whitelist
May 2 11:56:37.926: INFO: namespace e2e-tests-daemonsets-zmgm4 deletion completed in 6.090156818s

• [SLOW TEST:38.649 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:56:37.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-fa843e71-8c6b-11ea-8045-0242ac110017
STEP: Creating a pod to test consume configMaps
May 2 11:56:38.088: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fa85d4dc-8c6b-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-8lrnq" to be "success or failure"
May 2 11:56:38.123: INFO: Pod "pod-projected-configmaps-fa85d4dc-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 34.666515ms
May 2 11:56:40.126: INFO: Pod "pod-projected-configmaps-fa85d4dc-8c6b-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038265405s
May 2 11:56:42.131: INFO: Pod "pod-projected-configmaps-fa85d4dc-8c6b-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042717501s
STEP: Saw pod success
May 2 11:56:42.131: INFO: Pod "pod-projected-configmaps-fa85d4dc-8c6b-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:56:42.134: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-fa85d4dc-8c6b-11ea-8045-0242ac110017 container projected-configmap-volume-test:
STEP: delete the pod
May 2 11:56:42.151: INFO: Waiting for pod pod-projected-configmaps-fa85d4dc-8c6b-11ea-8045-0242ac110017 to disappear
May 2 11:56:42.155: INFO: Pod pod-projected-configmaps-fa85d4dc-8c6b-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:56:42.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8lrnq" for this suite.
May 2 11:56:48.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:56:48.239: INFO: namespace: e2e-tests-projected-8lrnq, resource: bindings, ignored listing per whitelist
May 2 11:56:48.267: INFO: namespace e2e-tests-projected-8lrnq deletion completed in 6.107967537s

• [SLOW TEST:10.340 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:56:48.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-00b44b64-8c6c-11ea-8045-0242ac110017
STEP: Creating a pod to test consume secrets
May 2 11:56:48.468: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-00b4e63d-8c6c-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-95mnk" to be "success or failure"
May 2 11:56:48.479: INFO: Pod "pod-projected-secrets-00b4e63d-8c6c-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 11.491181ms
May 2 11:56:50.752: INFO: Pod "pod-projected-secrets-00b4e63d-8c6c-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.283976907s
May 2 11:56:52.756: INFO: Pod "pod-projected-secrets-00b4e63d-8c6c-11ea-8045-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.288297034s
May 2 11:56:54.761: INFO: Pod "pod-projected-secrets-00b4e63d-8c6c-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.293215153s
STEP: Saw pod success
May 2 11:56:54.761: INFO: Pod "pod-projected-secrets-00b4e63d-8c6c-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 11:56:54.764: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-00b4e63d-8c6c-11ea-8045-0242ac110017 container projected-secret-volume-test:
STEP: delete the pod
May 2 11:56:54.824: INFO: Waiting for pod pod-projected-secrets-00b4e63d-8c6c-11ea-8045-0242ac110017 to disappear
May 2 11:56:54.839: INFO: Pod pod-projected-secrets-00b4e63d-8c6c-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:56:54.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-95mnk" for this suite.
May 2 11:57:00.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:57:00.879: INFO: namespace: e2e-tests-projected-95mnk, resource: bindings, ignored listing per whitelist
May 2 11:57:00.955: INFO: namespace e2e-tests-projected-95mnk deletion completed in 6.113711976s

• [SLOW TEST:12.688 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:57:00.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
May 2 11:57:01.064: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:57:09.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-c9r82" for this suite.
May 2 11:57:31.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:57:31.619: INFO: namespace: e2e-tests-init-container-c9r82, resource: bindings, ignored listing per whitelist
May 2 11:57:31.626: INFO: namespace e2e-tests-init-container-c9r82 deletion completed in 22.118994984s

• [SLOW TEST:30.671 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:57:31.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May 2 11:57:41.848: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-frbtl PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 2 11:57:41.848: INFO: >>> kubeConfig: /root/.kube/config
I0502 11:57:41.879275 6 log.go:172] (0xc0023542c0) (0xc000c9b540) Create stream
I0502 11:57:41.879305 6 log.go:172] (0xc0023542c0) (0xc000c9b540) Stream added, broadcasting: 1
I0502 11:57:41.881322 6 log.go:172] (0xc0023542c0) Reply frame received for 1
I0502 11:57:41.881356 6 log.go:172] (0xc0023542c0) (0xc0023e7860) Create stream
I0502 11:57:41.881367 6 log.go:172] (0xc0023542c0) (0xc0023e7860) Stream added, broadcasting: 3
I0502 11:57:41.882278 6 log.go:172] (0xc0023542c0) Reply frame received for 3
I0502 11:57:41.882312 6 log.go:172] (0xc0023542c0) (0xc0022f1540) Create stream
I0502 11:57:41.882326 6 log.go:172] (0xc0023542c0) (0xc0022f1540) Stream added, broadcasting: 5
I0502 11:57:41.883363 6 log.go:172] (0xc0023542c0) Reply frame received for 5
I0502 11:57:41.974874 6 log.go:172] (0xc0023542c0) Data frame received for 5
I0502 11:57:41.974898 6 log.go:172] (0xc0022f1540) (5) Data frame handling
I0502 11:57:41.974915 6 log.go:172] (0xc0023542c0) Data frame received for 3
I0502 11:57:41.974925 6 log.go:172] (0xc0023e7860) (3) Data frame handling
I0502 11:57:41.974938 6 log.go:172] (0xc0023e7860) (3) Data frame sent
I0502 11:57:41.974946 6 log.go:172] (0xc0023542c0) Data frame received for 3
I0502 11:57:41.974951 6 log.go:172] (0xc0023e7860) (3) Data frame handling
I0502 11:57:41.976393 6 log.go:172] (0xc0023542c0) Data frame received for 1
I0502 11:57:41.976406 6 log.go:172] (0xc000c9b540) (1) Data frame handling
I0502 11:57:41.976424 6 log.go:172] (0xc000c9b540) (1) Data frame sent
I0502 11:57:41.976541 6 log.go:172] (0xc0023542c0) (0xc000c9b540) Stream removed, broadcasting: 1
I0502 11:57:41.976621 6 log.go:172] (0xc0023542c0) (0xc000c9b540) Stream removed, broadcasting: 1
I0502 11:57:41.976657 6 log.go:172] (0xc0023542c0) (0xc0023e7860) Stream removed, broadcasting: 3
I0502 11:57:41.976755 6 log.go:172] (0xc0023542c0) Go away received
I0502 11:57:41.976792 6 log.go:172] (0xc0023542c0) (0xc0022f1540) Stream removed, broadcasting: 5
May 2 11:57:41.976: INFO: Exec stderr: ""
May 2 11:57:41.976: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-frbtl PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 2 11:57:41.976: INFO: >>> kubeConfig: /root/.kube/config
I0502 11:57:41.999581 6 log.go:172] (0xc00032fe40) (0xc0023e7ae0) Create stream
I0502 11:57:41.999611 6 log.go:172] (0xc00032fe40) (0xc0023e7ae0) Stream added, broadcasting: 1
I0502 11:57:42.001341 6 log.go:172] (0xc00032fe40) Reply frame received for 1
I0502 11:57:42.001388 6 log.go:172] (0xc00032fe40) (0xc0022f15e0) Create stream
I0502 11:57:42.001402 6 log.go:172] (0xc00032fe40) (0xc0022f15e0) Stream added, broadcasting: 3
I0502 11:57:42.002133 6 log.go:172] (0xc00032fe40) Reply frame received for 3
I0502 11:57:42.002160 6 log.go:172] (0xc00032fe40) (0xc0022f1680) Create stream
I0502 11:57:42.002170 6 log.go:172] (0xc00032fe40) (0xc0022f1680) Stream added, broadcasting: 5
I0502 11:57:42.002866 6 log.go:172] (0xc00032fe40) Reply frame received for 5
I0502 11:57:42.082470 6 log.go:172] (0xc00032fe40) Data frame received for 3
I0502 11:57:42.082509 6 log.go:172] (0xc0022f15e0) (3) Data frame handling
I0502 11:57:42.082526 6 log.go:172] (0xc0022f15e0) (3) Data frame sent
I0502 11:57:42.082575 6 log.go:172] (0xc00032fe40) Data frame received for 3
I0502 11:57:42.082595 6 log.go:172] (0xc0022f15e0) (3) Data frame handling
I0502 11:57:42.082621 6 log.go:172] (0xc00032fe40) Data frame received for 5
I0502 11:57:42.082639 6 log.go:172] (0xc0022f1680) (5) Data frame handling
I0502 11:57:42.084241 6 log.go:172] (0xc00032fe40) Data frame received for 1
I0502 11:57:42.084277 6 log.go:172] (0xc0023e7ae0) (1) Data frame handling
I0502 11:57:42.084294 6 log.go:172] (0xc0023e7ae0) (1) Data frame sent
I0502 11:57:42.084311 6 log.go:172] (0xc00032fe40) (0xc0023e7ae0) Stream removed, broadcasting: 1
I0502 11:57:42.084330 6 log.go:172] (0xc00032fe40) Go away received
I0502 11:57:42.084632 6 log.go:172] (0xc00032fe40) (0xc0023e7ae0) Stream removed, broadcasting: 1
I0502 11:57:42.084653 6 log.go:172] (0xc00032fe40) (0xc0022f15e0) Stream removed, broadcasting: 3
I0502 11:57:42.084665 6 log.go:172] (0xc00032fe40) (0xc0022f1680) Stream removed, broadcasting: 5
May 2 11:57:42.084: INFO: Exec stderr: ""
May 2 11:57:42.084: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-frbtl PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 2 11:57:42.084: INFO: >>> kubeConfig: /root/.kube/config
I0502 11:57:42.115358 6 log.go:172] (0xc001a2a2c0) (0xc0022f1900) Create stream
I0502 11:57:42.115394 6 log.go:172] (0xc001a2a2c0) (0xc0022f1900) Stream added, broadcasting: 1
I0502 11:57:42.117822 6 log.go:172] (0xc001a2a2c0) Reply frame received for 1
I0502 11:57:42.117848 6 log.go:172] (0xc001a2a2c0) (0xc0022f19a0) Create stream
I0502 11:57:42.117856 6 log.go:172] (0xc001a2a2c0) (0xc0022f19a0) Stream added, broadcasting: 3
I0502 11:57:42.118504 6 log.go:172] (0xc001a2a2c0) Reply frame received for 3
I0502 11:57:42.118535 6 log.go:172] (0xc001a2a2c0) (0xc002364a00) Create stream
I0502 11:57:42.118546 6 log.go:172] (0xc001a2a2c0) (0xc002364a00) Stream added, broadcasting: 5
I0502 11:57:42.119163 6 log.go:172] (0xc001a2a2c0) Reply frame received for 5
I0502 11:57:42.175735 6 log.go:172] (0xc001a2a2c0) Data frame received for 3
I0502 11:57:42.175862 6 log.go:172] (0xc0022f19a0) (3) Data frame handling
I0502 11:57:42.175888 6 log.go:172] (0xc0022f19a0) (3) Data frame sent
I0502 11:57:42.175898 6 log.go:172] (0xc001a2a2c0) Data frame received for 3
I0502 11:57:42.175911 6 log.go:172] (0xc0022f19a0) (3) Data frame handling
I0502 11:57:42.175943 6 log.go:172] (0xc001a2a2c0) Data frame received for 5
I0502 11:57:42.175972 6 log.go:172] (0xc002364a00) (5) Data frame handling
I0502 11:57:42.177938 6 log.go:172] (0xc001a2a2c0) Data frame received for 1
I0502 11:57:42.177974 6 log.go:172] (0xc0022f1900) (1) Data frame handling
I0502 11:57:42.177997 6 log.go:172] (0xc0022f1900) (1) Data frame sent
I0502 11:57:42.178015 6 log.go:172] (0xc001a2a2c0) (0xc0022f1900) Stream removed, broadcasting: 1
I0502 11:57:42.178035 6 log.go:172] (0xc001a2a2c0) Go away received
I0502 11:57:42.178206 6 log.go:172] (0xc001a2a2c0) (0xc0022f1900) Stream removed, broadcasting: 1
I0502 11:57:42.178233 6 log.go:172] (0xc001a2a2c0) (0xc0022f19a0) Stream removed, broadcasting: 3
I0502 11:57:42.178245 6 log.go:172] (0xc001a2a2c0) (0xc002364a00) Stream removed, broadcasting: 5
May 2 11:57:42.178: INFO: Exec stderr: ""
May 2 11:57:42.178: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-frbtl PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 2 11:57:42.178: INFO: >>> kubeConfig: /root/.kube/config
I0502 11:57:42.209733 6 log.go:172] (0xc001a2a790) (0xc0022f1d60) Create stream
I0502 11:57:42.209765 6 log.go:172] (0xc001a2a790) (0xc0022f1d60) Stream added, broadcasting: 1
I0502 11:57:42.214872 6 log.go:172] (0xc001a2a790) Reply frame received for 1
I0502 11:57:42.214939 6 log.go:172] (0xc001a2a790) (0xc0014a2000) Create stream
I0502 11:57:42.214953 6 log.go:172] (0xc001a2a790) (0xc0014a2000) Stream added, broadcasting: 3
I0502 11:57:42.216218 6 log.go:172] (0xc001a2a790) Reply frame received for 3
I0502 11:57:42.216303 6 log.go:172] (0xc001a2a790) (0xc0014a20a0) Create stream
I0502 11:57:42.216318 6 log.go:172] (0xc001a2a790) (0xc0014a20a0) Stream added, broadcasting: 5
I0502 11:57:42.217422 6 log.go:172] (0xc001a2a790) Reply frame received for 5
I0502 11:57:42.276565 6 log.go:172] (0xc001a2a790) Data frame received for 3
I0502 11:57:42.276638 6 log.go:172] (0xc0014a2000) (3) Data frame handling
I0502 11:57:42.276665 6 log.go:172] (0xc0014a2000) (3) Data frame sent
I0502 11:57:42.276685 6 log.go:172] (0xc001a2a790) Data frame received for 3
I0502 11:57:42.276703 6 log.go:172] (0xc0014a2000) (3) Data frame handling
I0502 11:57:42.276727 6 log.go:172] (0xc001a2a790) Data frame received for 5
I0502 11:57:42.276745 6 log.go:172] (0xc0014a20a0) (5) Data frame handling
I0502 11:57:42.278433 6 log.go:172] (0xc001a2a790) Data frame received for 1
I0502 11:57:42.278467 6 log.go:172] (0xc0022f1d60) (1) Data frame handling
I0502 11:57:42.278490 6 log.go:172] (0xc0022f1d60) (1) Data frame sent
I0502 11:57:42.278504 6 log.go:172] (0xc001a2a790) (0xc0022f1d60) Stream removed, broadcasting: 1
I0502 11:57:42.278519 6 log.go:172] (0xc001a2a790) Go away received
I0502 11:57:42.278670 6 log.go:172] (0xc001a2a790) (0xc0022f1d60) Stream removed, broadcasting: 1
I0502 11:57:42.278696 6 log.go:172] (0xc001a2a790) (0xc0014a2000) Stream removed, broadcasting: 3
I0502 11:57:42.278709 6 log.go:172] (0xc001a2a790) (0xc0014a20a0) Stream removed, broadcasting: 5
May 2 11:57:42.278: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May 2 11:57:42.278: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-frbtl PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 2 11:57:42.278: INFO: >>> kubeConfig: /root/.kube/config
I0502 11:57:42.312468 6 log.go:172] (0xc00032fce0) (0xc001a88320) Create stream
I0502 11:57:42.312506 6 log.go:172] (0xc00032fce0) (0xc001a88320) Stream added, broadcasting: 1
I0502 11:57:42.315058 6 log.go:172] (0xc00032fce0) Reply frame received for 1
I0502 11:57:42.315113 6 log.go:172] (0xc00032fce0) (0xc0014a2140) Create stream
I0502 11:57:42.315126 6 log.go:172] (0xc00032fce0) (0xc0014a2140) Stream added, broadcasting: 3
I0502 11:57:42.316187 6 log.go:172] (0xc00032fce0) Reply frame received for 3
I0502 11:57:42.316249 6 log.go:172] (0xc00032fce0) (0xc001026000) Create stream
I0502 11:57:42.316278 6 log.go:172] (0xc00032fce0) (0xc001026000) Stream added, broadcasting: 5
I0502 11:57:42.317024 6 log.go:172] (0xc00032fce0) Reply frame received for 5
I0502 11:57:42.380372 6 log.go:172] (0xc00032fce0) Data frame received for 3
I0502 11:57:42.380426 6 log.go:172] (0xc0014a2140) (3) Data frame handling
I0502 11:57:42.380439 6 log.go:172] (0xc0014a2140) (3) Data frame sent
I0502 11:57:42.380450 6 log.go:172] (0xc00032fce0) Data frame received for 3
I0502 11:57:42.380457 6 log.go:172] (0xc0014a2140) (3) Data frame handling
I0502 11:57:42.380504 6 log.go:172] (0xc00032fce0) Data frame received for 5
I0502 11:57:42.380526 6 log.go:172] (0xc001026000) (5) Data frame handling
I0502 11:57:42.382125 6 log.go:172] (0xc00032fce0) Data frame received for 1
I0502 11:57:42.382140 6 log.go:172] (0xc001a88320) (1) Data frame handling
I0502 11:57:42.382148 6 log.go:172] (0xc001a88320) (1) Data frame sent
I0502 11:57:42.382162 6 log.go:172] (0xc00032fce0) (0xc001a88320) Stream removed, broadcasting: 1
I0502 11:57:42.382214 6 log.go:172] (0xc00032fce0) Go away received
I0502 11:57:42.382254 6 log.go:172] (0xc00032fce0) (0xc001a88320) Stream removed, broadcasting: 1
I0502 11:57:42.382271 6 log.go:172] (0xc00032fce0) (0xc0014a2140) Stream removed, broadcasting: 3
I0502 11:57:42.382284 6 log.go:172] (0xc00032fce0) (0xc001026000) Stream removed, broadcasting: 5
May 2 11:57:42.382: INFO: Exec stderr: ""
May 2 11:57:42.382: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-frbtl PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 2 11:57:42.382: INFO: >>> kubeConfig: /root/.kube/config
I0502 11:57:42.415509 6 log.go:172] (0xc001a2a210) (0xc001a885a0) Create stream
I0502 11:57:42.415542 6 log.go:172] (0xc001a2a210) (0xc001a885a0) Stream added, broadcasting: 1
I0502 11:57:42.417653 6 log.go:172] (0xc001a2a210) Reply frame received for 1
I0502 11:57:42.417698 6 log.go:172] (0xc001a2a210) (0xc00209a000) Create stream
I0502 11:57:42.417717 6 log.go:172] (0xc001a2a210) (0xc00209a000) Stream added, broadcasting: 3
I0502 11:57:42.418611 6 log.go:172] (0xc001a2a210) Reply frame received for 3
I0502 11:57:42.418658 6 log.go:172] (0xc001a2a210) (0xc001fce000) Create stream
I0502 11:57:42.418676 6 log.go:172] (0xc001a2a210) (0xc001fce000) Stream added, broadcasting: 5
I0502 11:57:42.419636 6 log.go:172] (0xc001a2a210) Reply frame received for 5
I0502 11:57:42.480797 6 log.go:172] (0xc001a2a210) Data frame received for 5
I0502 11:57:42.480830 6 log.go:172] (0xc001a2a210) Data frame received for 3
I0502 11:57:42.480853 6 log.go:172] (0xc00209a000) (3) Data frame handling
I0502 11:57:42.480864 6 log.go:172] (0xc00209a000) (3) Data frame sent
I0502 11:57:42.480870 6 log.go:172] (0xc001a2a210) Data frame received for 3
I0502 11:57:42.480883 6 log.go:172] (0xc00209a000) (3) Data frame handling
I0502 11:57:42.480915 6 log.go:172] (0xc001fce000) (5) Data frame handling
I0502 11:57:42.482742 6 log.go:172] (0xc001a2a210) Data frame received for 1
I0502 11:57:42.482768 6 log.go:172] (0xc001a885a0) (1) Data frame handling
I0502 11:57:42.482780 6 log.go:172] (0xc001a885a0) (1) Data frame sent
I0502 11:57:42.482789 6 log.go:172] (0xc001a2a210) (0xc001a885a0) Stream removed, broadcasting: 1
I0502 11:57:42.482904 6 log.go:172] (0xc001a2a210) (0xc001a885a0) Stream removed, broadcasting: 1
I0502 11:57:42.482921 6 log.go:172] (0xc001a2a210) (0xc00209a000) Stream removed, broadcasting: 3
I0502 11:57:42.483001 6 log.go:172] (0xc001a2a210) Go away received
I0502 11:57:42.483105 6 log.go:172] (0xc001a2a210) (0xc001fce000) Stream removed, broadcasting: 5
May 2 11:57:42.483: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May 2 11:57:42.483: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-frbtl PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 2 11:57:42.483: INFO: >>> kubeConfig: /root/.kube/config
I0502 11:57:42.512699 6 log.go:172] (0xc000ae9600) (0xc00209a3c0) Create stream
I0502 11:57:42.512727 6 log.go:172] (0xc000ae9600) (0xc00209a3c0) Stream added, broadcasting: 1
I0502 11:57:42.515470 6 log.go:172] (0xc000ae9600) Reply frame received for 1
I0502 11:57:42.515532 6 log.go:172] (0xc000ae9600) (0xc00209a500) Create stream
I0502 11:57:42.515556 6 log.go:172] (0xc000ae9600) (0xc00209a500) Stream added, broadcasting: 3
I0502 11:57:42.516594 6 log.go:172] (0xc000ae9600) Reply frame received for 3
I0502 11:57:42.516639 6 log.go:172] (0xc000ae9600) (0xc0014a21e0) Create stream
I0502 11:57:42.516665 6 log.go:172] (0xc000ae9600) (0xc0014a21e0) Stream added, broadcasting: 5
I0502 11:57:42.518273 6 log.go:172] (0xc000ae9600) Reply frame received for 5
I0502 11:57:42.567292 6 log.go:172] (0xc000ae9600) Data frame received for 3
I0502 11:57:42.567333 6 log.go:172] (0xc00209a500) (3) Data frame handling
I0502 11:57:42.567353 6 log.go:172] (0xc00209a500) (3) Data frame sent
I0502 11:57:42.567366 6 log.go:172] (0xc000ae9600) Data frame received for 3
I0502 11:57:42.567379 6 log.go:172] (0xc00209a500) (3) Data frame handling
I0502 11:57:42.567428 6 log.go:172] (0xc000ae9600) Data frame received for 5
I0502 11:57:42.567461 6 log.go:172] (0xc0014a21e0) (5) Data frame handling
I0502 11:57:42.568760 6 log.go:172] (0xc000ae9600) Data frame received for 1
I0502 11:57:42.568795 6 log.go:172] (0xc00209a3c0) (1) Data frame handling
I0502 11:57:42.568827 6 log.go:172] (0xc00209a3c0) (1) Data frame sent
I0502 11:57:42.568852 6 log.go:172] (0xc000ae9600) (0xc00209a3c0) Stream removed, broadcasting: 1
I0502 11:57:42.569005 6 log.go:172] (0xc000ae9600) (0xc00209a3c0) Stream removed, broadcasting: 1
I0502 11:57:42.569050 6 log.go:172] (0xc000ae9600)
(0xc00209a500) Stream removed, broadcasting: 3 I0502 11:57:42.569076 6 log.go:172] (0xc000ae9600) (0xc0014a21e0) Stream removed, broadcasting: 5 May 2 11:57:42.569: INFO: Exec stderr: "" May 2 11:57:42.569: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-frbtl PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 11:57:42.569: INFO: >>> kubeConfig: /root/.kube/config I0502 11:57:42.569704 6 log.go:172] (0xc000ae9600) Go away received I0502 11:57:42.601336 6 log.go:172] (0xc00291a2c0) (0xc0014a2500) Create stream I0502 11:57:42.601362 6 log.go:172] (0xc00291a2c0) (0xc0014a2500) Stream added, broadcasting: 1 I0502 11:57:42.603142 6 log.go:172] (0xc00291a2c0) Reply frame received for 1 I0502 11:57:42.603183 6 log.go:172] (0xc00291a2c0) (0xc001fce140) Create stream I0502 11:57:42.603213 6 log.go:172] (0xc00291a2c0) (0xc001fce140) Stream added, broadcasting: 3 I0502 11:57:42.604348 6 log.go:172] (0xc00291a2c0) Reply frame received for 3 I0502 11:57:42.604378 6 log.go:172] (0xc00291a2c0) (0xc001fce1e0) Create stream I0502 11:57:42.604386 6 log.go:172] (0xc00291a2c0) (0xc001fce1e0) Stream added, broadcasting: 5 I0502 11:57:42.605793 6 log.go:172] (0xc00291a2c0) Reply frame received for 5 I0502 11:57:42.674094 6 log.go:172] (0xc00291a2c0) Data frame received for 5 I0502 11:57:42.674170 6 log.go:172] (0xc001fce1e0) (5) Data frame handling I0502 11:57:42.674220 6 log.go:172] (0xc00291a2c0) Data frame received for 3 I0502 11:57:42.674250 6 log.go:172] (0xc001fce140) (3) Data frame handling I0502 11:57:42.674284 6 log.go:172] (0xc001fce140) (3) Data frame sent I0502 11:57:42.674307 6 log.go:172] (0xc00291a2c0) Data frame received for 3 I0502 11:57:42.674328 6 log.go:172] (0xc001fce140) (3) Data frame handling I0502 11:57:42.675998 6 log.go:172] (0xc00291a2c0) Data frame received for 1 I0502 11:57:42.676037 6 log.go:172] (0xc0014a2500) (1) Data frame 
handling I0502 11:57:42.676080 6 log.go:172] (0xc0014a2500) (1) Data frame sent I0502 11:57:42.676097 6 log.go:172] (0xc00291a2c0) (0xc0014a2500) Stream removed, broadcasting: 1 I0502 11:57:42.676196 6 log.go:172] (0xc00291a2c0) (0xc0014a2500) Stream removed, broadcasting: 1 I0502 11:57:42.676214 6 log.go:172] (0xc00291a2c0) (0xc001fce140) Stream removed, broadcasting: 3 I0502 11:57:42.676234 6 log.go:172] (0xc00291a2c0) (0xc001fce1e0) Stream removed, broadcasting: 5 May 2 11:57:42.676: INFO: Exec stderr: "" May 2 11:57:42.676: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-frbtl PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 11:57:42.676: INFO: >>> kubeConfig: /root/.kube/config I0502 11:57:42.678499 6 log.go:172] (0xc00291a2c0) Go away received I0502 11:57:42.708607 6 log.go:172] (0xc001a2a580) (0xc001a88780) Create stream I0502 11:57:42.708642 6 log.go:172] (0xc001a2a580) (0xc001a88780) Stream added, broadcasting: 1 I0502 11:57:42.711455 6 log.go:172] (0xc001a2a580) Reply frame received for 1 I0502 11:57:42.711493 6 log.go:172] (0xc001a2a580) (0xc001a888c0) Create stream I0502 11:57:42.711503 6 log.go:172] (0xc001a2a580) (0xc001a888c0) Stream added, broadcasting: 3 I0502 11:57:42.712202 6 log.go:172] (0xc001a2a580) Reply frame received for 3 I0502 11:57:42.712238 6 log.go:172] (0xc001a2a580) (0xc0010260a0) Create stream I0502 11:57:42.712252 6 log.go:172] (0xc001a2a580) (0xc0010260a0) Stream added, broadcasting: 5 I0502 11:57:42.712891 6 log.go:172] (0xc001a2a580) Reply frame received for 5 I0502 11:57:42.777378 6 log.go:172] (0xc001a2a580) Data frame received for 3 I0502 11:57:42.777428 6 log.go:172] (0xc001a888c0) (3) Data frame handling I0502 11:57:42.777448 6 log.go:172] (0xc001a888c0) (3) Data frame sent I0502 11:57:42.777462 6 log.go:172] (0xc001a2a580) Data frame received for 3 I0502 11:57:42.777482 6 log.go:172] (0xc001a888c0) 
(3) Data frame handling I0502 11:57:42.777509 6 log.go:172] (0xc001a2a580) Data frame received for 5 I0502 11:57:42.777534 6 log.go:172] (0xc0010260a0) (5) Data frame handling I0502 11:57:42.779180 6 log.go:172] (0xc001a2a580) Data frame received for 1 I0502 11:57:42.779205 6 log.go:172] (0xc001a88780) (1) Data frame handling I0502 11:57:42.779227 6 log.go:172] (0xc001a88780) (1) Data frame sent I0502 11:57:42.779254 6 log.go:172] (0xc001a2a580) (0xc001a88780) Stream removed, broadcasting: 1 I0502 11:57:42.779279 6 log.go:172] (0xc001a2a580) Go away received I0502 11:57:42.779409 6 log.go:172] (0xc001a2a580) (0xc001a88780) Stream removed, broadcasting: 1 I0502 11:57:42.779439 6 log.go:172] (0xc001a2a580) (0xc001a888c0) Stream removed, broadcasting: 3 I0502 11:57:42.779459 6 log.go:172] (0xc001a2a580) (0xc0010260a0) Stream removed, broadcasting: 5 May 2 11:57:42.779: INFO: Exec stderr: "" May 2 11:57:42.779: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-frbtl PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 11:57:42.779: INFO: >>> kubeConfig: /root/.kube/config I0502 11:57:42.810886 6 log.go:172] (0xc00291a790) (0xc0014a2780) Create stream I0502 11:57:42.810921 6 log.go:172] (0xc00291a790) (0xc0014a2780) Stream added, broadcasting: 1 I0502 11:57:42.813536 6 log.go:172] (0xc00291a790) Reply frame received for 1 I0502 11:57:42.813581 6 log.go:172] (0xc00291a790) (0xc0014a2820) Create stream I0502 11:57:42.813600 6 log.go:172] (0xc00291a790) (0xc0014a2820) Stream added, broadcasting: 3 I0502 11:57:42.814668 6 log.go:172] (0xc00291a790) Reply frame received for 3 I0502 11:57:42.814704 6 log.go:172] (0xc00291a790) (0xc0014a28c0) Create stream I0502 11:57:42.814718 6 log.go:172] (0xc00291a790) (0xc0014a28c0) Stream added, broadcasting: 5 I0502 11:57:42.815571 6 log.go:172] (0xc00291a790) Reply frame received for 5 I0502 11:57:42.886042 6 
log.go:172] (0xc00291a790) Data frame received for 5 I0502 11:57:42.886094 6 log.go:172] (0xc0014a28c0) (5) Data frame handling I0502 11:57:42.886127 6 log.go:172] (0xc00291a790) Data frame received for 3 I0502 11:57:42.886142 6 log.go:172] (0xc0014a2820) (3) Data frame handling I0502 11:57:42.886161 6 log.go:172] (0xc0014a2820) (3) Data frame sent I0502 11:57:42.886172 6 log.go:172] (0xc00291a790) Data frame received for 3 I0502 11:57:42.886199 6 log.go:172] (0xc0014a2820) (3) Data frame handling I0502 11:57:42.887357 6 log.go:172] (0xc00291a790) Data frame received for 1 I0502 11:57:42.887377 6 log.go:172] (0xc0014a2780) (1) Data frame handling I0502 11:57:42.887394 6 log.go:172] (0xc0014a2780) (1) Data frame sent I0502 11:57:42.887440 6 log.go:172] (0xc00291a790) (0xc0014a2780) Stream removed, broadcasting: 1 I0502 11:57:42.887469 6 log.go:172] (0xc00291a790) Go away received I0502 11:57:42.887549 6 log.go:172] (0xc00291a790) (0xc0014a2780) Stream removed, broadcasting: 1 I0502 11:57:42.887571 6 log.go:172] (0xc00291a790) (0xc0014a2820) Stream removed, broadcasting: 3 I0502 11:57:42.887581 6 log.go:172] (0xc00291a790) (0xc0014a28c0) Stream removed, broadcasting: 5 May 2 11:57:42.887: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 11:57:42.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-frbtl" for this suite. 
May 2 11:58:22.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 11:58:22.926: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-frbtl, resource: bindings, ignored listing per whitelist
May 2 11:58:22.978: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-frbtl deletion completed in 40.086188631s
• [SLOW TEST:51.352 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 11:58:22.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 2 11:58:27.694: INFO: Successfully updated pod "pod-update-3921e549-8c6c-11ea-8045-0242ac110017"
STEP: verifying the updated pod is in kubernetes
May 2 11:58:27.699: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 11:58:27.699: INFO: Waiting up to 3m0s for all (but 0)
nodes to be ready STEP: Destroying namespace "e2e-tests-pods-kt7kl" for this suite. May 2 11:58:49.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 11:58:49.818: INFO: namespace: e2e-tests-pods-kt7kl, resource: bindings, ignored listing per whitelist May 2 11:58:49.832: INFO: namespace e2e-tests-pods-kt7kl deletion completed in 22.108958525s • [SLOW TEST:26.854 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 11:58:49.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 2 11:58:50.484: INFO: Pod name wrapped-volume-race-496c9191-8c6c-11ea-8045-0242ac110017: Found 0 pods out of 5 May 2 11:58:55.493: INFO: Pod name wrapped-volume-race-496c9191-8c6c-11ea-8045-0242ac110017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController 
wrapped-volume-race-496c9191-8c6c-11ea-8045-0242ac110017 in namespace e2e-tests-emptydir-wrapper-jwmzc, will wait for the garbage collector to delete the pods May 2 12:00:37.570: INFO: Deleting ReplicationController wrapped-volume-race-496c9191-8c6c-11ea-8045-0242ac110017 took: 6.493952ms May 2 12:00:37.671: INFO: Terminating ReplicationController wrapped-volume-race-496c9191-8c6c-11ea-8045-0242ac110017 pods took: 100.231947ms STEP: Creating RC which spawns configmap-volume pods May 2 12:01:21.405: INFO: Pod name wrapped-volume-race-a3626ae2-8c6c-11ea-8045-0242ac110017: Found 0 pods out of 5 May 2 12:01:26.412: INFO: Pod name wrapped-volume-race-a3626ae2-8c6c-11ea-8045-0242ac110017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a3626ae2-8c6c-11ea-8045-0242ac110017 in namespace e2e-tests-emptydir-wrapper-jwmzc, will wait for the garbage collector to delete the pods May 2 12:03:50.497: INFO: Deleting ReplicationController wrapped-volume-race-a3626ae2-8c6c-11ea-8045-0242ac110017 took: 8.490853ms May 2 12:03:50.697: INFO: Terminating ReplicationController wrapped-volume-race-a3626ae2-8c6c-11ea-8045-0242ac110017 pods took: 200.353584ms STEP: Creating RC which spawns configmap-volume pods May 2 12:04:38.051: INFO: Pod name wrapped-volume-race-1885fc39-8c6d-11ea-8045-0242ac110017: Found 0 pods out of 5 May 2 12:04:45.539: INFO: Pod name wrapped-volume-race-1885fc39-8c6d-11ea-8045-0242ac110017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1885fc39-8c6d-11ea-8045-0242ac110017 in namespace e2e-tests-emptydir-wrapper-jwmzc, will wait for the garbage collector to delete the pods May 2 12:06:47.035: INFO: Deleting ReplicationController wrapped-volume-race-1885fc39-8c6d-11ea-8045-0242ac110017 took: 6.912665ms May 2 12:06:47.135: INFO: Terminating ReplicationController wrapped-volume-race-1885fc39-8c6d-11ea-8045-0242ac110017 pods took: 
100.194432ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:07:32.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-jwmzc" for this suite. May 2 12:07:40.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:07:40.931: INFO: namespace: e2e-tests-emptydir-wrapper-jwmzc, resource: bindings, ignored listing per whitelist May 2 12:07:40.952: INFO: namespace e2e-tests-emptydir-wrapper-jwmzc deletion completed in 8.087596144s • [SLOW TEST:531.119 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:07:40.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: 
create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0502 12:07:53.296672 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 2 12:07:53.296: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:07:53.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-wjbhb" for this suite.
May 2 12:08:01.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:08:01.840: INFO: namespace: e2e-tests-gc-wjbhb, resource: bindings, ignored listing per whitelist May 2 12:08:01.859: INFO: namespace e2e-tests-gc-wjbhb deletion completed in 8.385022037s • [SLOW TEST:20.907 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:08:01.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-qr5hk/configmap-test-922d32b0-8c6d-11ea-8045-0242ac110017 STEP: Creating a pod to test consume configMaps May 2 12:08:02.040: INFO: Waiting up to 5m0s for pod "pod-configmaps-922ec928-8c6d-11ea-8045-0242ac110017" in namespace "e2e-tests-configmap-qr5hk" to be "success or failure" May 2 12:08:02.050: INFO: Pod "pod-configmaps-922ec928-8c6d-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.23677ms May 2 12:08:04.054: INFO: Pod "pod-configmaps-922ec928-8c6d-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014575489s May 2 12:08:06.058: INFO: Pod "pod-configmaps-922ec928-8c6d-11ea-8045-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.018187543s May 2 12:08:08.061: INFO: Pod "pod-configmaps-922ec928-8c6d-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021731245s STEP: Saw pod success May 2 12:08:08.061: INFO: Pod "pod-configmaps-922ec928-8c6d-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:08:08.063: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-922ec928-8c6d-11ea-8045-0242ac110017 container env-test: STEP: delete the pod May 2 12:08:08.100: INFO: Waiting for pod pod-configmaps-922ec928-8c6d-11ea-8045-0242ac110017 to disappear May 2 12:08:08.116: INFO: Pod pod-configmaps-922ec928-8c6d-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:08:08.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qr5hk" for this suite. 
May 2 12:08:14.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:08:14.157: INFO: namespace: e2e-tests-configmap-qr5hk, resource: bindings, ignored listing per whitelist
May 2 12:08:14.190: INFO: namespace e2e-tests-configmap-qr5hk deletion completed in 6.071708973s
• [SLOW TEST:12.331 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:08:14.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-lq8l
STEP: Creating a pod to test atomic-volume-subpath
May 2 12:08:14.406: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-lq8l" in namespace "e2e-tests-subpath-qfwfb" to be "success or failure"
May 2 12:08:14.427: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Pending", Reason="", readiness=false.
Elapsed: 21.280601ms
May 2 12:08:16.432: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026229957s
May 2 12:08:18.569: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162948304s
May 2 12:08:20.627: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22050131s
May 2 12:08:22.631: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.224860863s
May 2 12:08:24.635: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Running", Reason="", readiness=true. Elapsed: 10.228931887s
May 2 12:08:26.638: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Running", Reason="", readiness=false. Elapsed: 12.232201695s
May 2 12:08:28.641: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Running", Reason="", readiness=false. Elapsed: 14.234837209s
May 2 12:08:30.645: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Running", Reason="", readiness=false. Elapsed: 16.23892959s
May 2 12:08:32.649: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Running", Reason="", readiness=false. Elapsed: 18.242801666s
May 2 12:08:34.652: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Running", Reason="", readiness=false. Elapsed: 20.246163973s
May 2 12:08:36.656: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Running", Reason="", readiness=false. Elapsed: 22.249649436s
May 2 12:08:38.659: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Running", Reason="", readiness=false. Elapsed: 24.253357308s
May 2 12:08:40.664: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Running", Reason="", readiness=false. Elapsed: 26.257713209s
May 2 12:08:42.690: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Running", Reason="", readiness=false. Elapsed: 28.283687255s
May 2 12:08:44.695: INFO: Pod "pod-subpath-test-secret-lq8l": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 30.289229078s STEP: Saw pod success May 2 12:08:44.695: INFO: Pod "pod-subpath-test-secret-lq8l" satisfied condition "success or failure" May 2 12:08:44.699: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-lq8l container test-container-subpath-secret-lq8l: STEP: delete the pod May 2 12:08:44.855: INFO: Waiting for pod pod-subpath-test-secret-lq8l to disappear May 2 12:08:44.907: INFO: Pod pod-subpath-test-secret-lq8l no longer exists STEP: Deleting pod pod-subpath-test-secret-lq8l May 2 12:08:44.907: INFO: Deleting pod "pod-subpath-test-secret-lq8l" in namespace "e2e-tests-subpath-qfwfb" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:08:44.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-qfwfb" for this suite. May 2 12:08:53.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:08:53.202: INFO: namespace: e2e-tests-subpath-qfwfb, resource: bindings, ignored listing per whitelist May 2 12:08:53.243: INFO: namespace e2e-tests-subpath-qfwfb deletion completed in 8.331112016s • [SLOW TEST:39.052 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:08:53.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 2 12:08:53.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-5x8xb' May 2 12:08:55.742: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 2 12:08:55.742: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 2 12:08:55.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-5x8xb' May 2 12:08:55.853: INFO: stderr: "" May 2 12:08:55.853: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:08:55.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-5x8xb" for this suite. May 2 12:09:17.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:09:17.956: INFO: namespace: e2e-tests-kubectl-5x8xb, resource: bindings, ignored listing per whitelist May 2 12:09:17.987: INFO: namespace e2e-tests-kubectl-5x8xb deletion completed in 22.130740286s • [SLOW TEST:24.744 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:09:17.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 2 12:09:24.219: INFO: Waiting up to 5m0s for pod "client-envvars-c32d4a28-8c6d-11ea-8045-0242ac110017" in namespace "e2e-tests-pods-wzb9h" to be "success or failure" May 2 12:09:24.225: INFO: Pod "client-envvars-c32d4a28-8c6d-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.544767ms May 2 12:09:26.283: INFO: Pod "client-envvars-c32d4a28-8c6d-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063516456s May 2 12:09:28.313: INFO: Pod "client-envvars-c32d4a28-8c6d-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.093394141s STEP: Saw pod success May 2 12:09:28.313: INFO: Pod "client-envvars-c32d4a28-8c6d-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:09:28.315: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-c32d4a28-8c6d-11ea-8045-0242ac110017 container env3cont: STEP: delete the pod May 2 12:09:28.368: INFO: Waiting for pod client-envvars-c32d4a28-8c6d-11ea-8045-0242ac110017 to disappear May 2 12:09:28.375: INFO: Pod client-envvars-c32d4a28-8c6d-11ea-8045-0242ac110017 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:09:28.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wzb9h" for this suite. May 2 12:10:14.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:10:14.452: INFO: namespace: e2e-tests-pods-wzb9h, resource: bindings, ignored listing per whitelist May 2 12:10:14.463: INFO: namespace e2e-tests-pods-wzb9h deletion completed in 46.084692534s • [SLOW TEST:56.476 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:10:14.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-e12f3841-8c6d-11ea-8045-0242ac110017 STEP: Creating secret with name secret-projected-all-test-volume-e12f3826-8c6d-11ea-8045-0242ac110017 STEP: Creating a pod to test Check all projections for projected volume plugin May 2 12:10:14.599: INFO: Waiting up to 5m0s for pod "projected-volume-e12f37d9-8c6d-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-dwnlv" to be "success or failure" May 2 12:10:14.615: INFO: Pod "projected-volume-e12f37d9-8c6d-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.583334ms May 2 12:10:16.823: INFO: Pod "projected-volume-e12f37d9-8c6d-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223993569s May 2 12:10:18.827: INFO: Pod "projected-volume-e12f37d9-8c6d-11ea-8045-0242ac110017": Phase="Running", Reason="", readiness=true. Elapsed: 4.22851544s May 2 12:10:20.832: INFO: Pod "projected-volume-e12f37d9-8c6d-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.233001486s STEP: Saw pod success May 2 12:10:20.832: INFO: Pod "projected-volume-e12f37d9-8c6d-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:10:20.835: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-e12f37d9-8c6d-11ea-8045-0242ac110017 container projected-all-volume-test: STEP: delete the pod May 2 12:10:20.869: INFO: Waiting for pod projected-volume-e12f37d9-8c6d-11ea-8045-0242ac110017 to disappear May 2 12:10:20.873: INFO: Pod projected-volume-e12f37d9-8c6d-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:10:20.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dwnlv" for this suite. May 2 12:10:26.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:10:27.010: INFO: namespace: e2e-tests-projected-dwnlv, resource: bindings, ignored listing per whitelist May 2 12:10:27.026: INFO: namespace e2e-tests-projected-dwnlv deletion completed in 6.149549103s • [SLOW TEST:12.563 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: 
Creating a kubernetes client May 2 12:10:27.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0502 12:10:28.185652 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 2 12:10:28.185: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:10:28.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-g628h" for this suite. May 2 12:10:34.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:10:34.435: INFO: namespace: e2e-tests-gc-g628h, resource: bindings, ignored listing per whitelist May 2 12:10:34.452: INFO: namespace e2e-tests-gc-g628h deletion completed in 6.263618515s • [SLOW TEST:7.426 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:10:34.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-l6ff STEP: Creating a pod to test atomic-volume-subpath May 2 12:10:34.594: INFO: Waiting up to 5m0s for pod 
"pod-subpath-test-downwardapi-l6ff" in namespace "e2e-tests-subpath-bw2hj" to be "success or failure" May 2 12:10:34.598: INFO: Pod "pod-subpath-test-downwardapi-l6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.732572ms May 2 12:10:36.603: INFO: Pod "pod-subpath-test-downwardapi-l6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009150794s May 2 12:10:38.607: INFO: Pod "pod-subpath-test-downwardapi-l6ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013283476s May 2 12:10:40.611: INFO: Pod "pod-subpath-test-downwardapi-l6ff": Phase="Running", Reason="", readiness=false. Elapsed: 6.017424343s May 2 12:10:42.615: INFO: Pod "pod-subpath-test-downwardapi-l6ff": Phase="Running", Reason="", readiness=false. Elapsed: 8.02138507s May 2 12:10:44.619: INFO: Pod "pod-subpath-test-downwardapi-l6ff": Phase="Running", Reason="", readiness=false. Elapsed: 10.025394559s May 2 12:10:46.624: INFO: Pod "pod-subpath-test-downwardapi-l6ff": Phase="Running", Reason="", readiness=false. Elapsed: 12.030008469s May 2 12:10:48.628: INFO: Pod "pod-subpath-test-downwardapi-l6ff": Phase="Running", Reason="", readiness=false. Elapsed: 14.034578304s May 2 12:10:50.633: INFO: Pod "pod-subpath-test-downwardapi-l6ff": Phase="Running", Reason="", readiness=false. Elapsed: 16.03950615s May 2 12:10:52.638: INFO: Pod "pod-subpath-test-downwardapi-l6ff": Phase="Running", Reason="", readiness=false. Elapsed: 18.044182388s May 2 12:10:54.643: INFO: Pod "pod-subpath-test-downwardapi-l6ff": Phase="Running", Reason="", readiness=false. Elapsed: 20.048943617s May 2 12:10:56.647: INFO: Pod "pod-subpath-test-downwardapi-l6ff": Phase="Running", Reason="", readiness=false. Elapsed: 22.053656417s May 2 12:10:58.651: INFO: Pod "pod-subpath-test-downwardapi-l6ff": Phase="Running", Reason="", readiness=false. Elapsed: 24.057608241s May 2 12:11:00.656: INFO: Pod "pod-subpath-test-downwardapi-l6ff": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.062210585s STEP: Saw pod success May 2 12:11:00.656: INFO: Pod "pod-subpath-test-downwardapi-l6ff" satisfied condition "success or failure" May 2 12:11:00.659: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-l6ff container test-container-subpath-downwardapi-l6ff: STEP: delete the pod May 2 12:11:00.716: INFO: Waiting for pod pod-subpath-test-downwardapi-l6ff to disappear May 2 12:11:00.725: INFO: Pod pod-subpath-test-downwardapi-l6ff no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-l6ff May 2 12:11:00.725: INFO: Deleting pod "pod-subpath-test-downwardapi-l6ff" in namespace "e2e-tests-subpath-bw2hj" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:11:00.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-bw2hj" for this suite. May 2 12:11:06.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:11:06.800: INFO: namespace: e2e-tests-subpath-bw2hj, resource: bindings, ignored listing per whitelist May 2 12:11:06.821: INFO: namespace e2e-tests-subpath-bw2hj deletion completed in 6.091229177s • [SLOW TEST:32.369 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] 
Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:11:06.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod May 2 12:11:11.488: INFO: Successfully updated pod "annotationupdate0064f0a1-8c6e-11ea-8045-0242ac110017" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:11:13.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8k2bf" for this suite. 
May 2 12:11:35.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:11:35.586: INFO: namespace: e2e-tests-projected-8k2bf, resource: bindings, ignored listing per whitelist May 2 12:11:35.622: INFO: namespace e2e-tests-projected-8k2bf deletion completed in 22.10016832s • [SLOW TEST:28.801 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:11:35.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 2 12:11:35.685: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:11:39.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vlbxj" for 
this suite. May 2 12:12:19.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:12:19.814: INFO: namespace: e2e-tests-pods-vlbxj, resource: bindings, ignored listing per whitelist May 2 12:12:19.846: INFO: namespace e2e-tests-pods-vlbxj deletion completed in 40.094301305s • [SLOW TEST:44.224 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:12:19.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-2beb0920-8c6e-11ea-8045-0242ac110017 STEP: Creating a pod to test consume secrets May 2 12:12:19.991: INFO: Waiting up to 5m0s for pod "pod-secrets-2bf21959-8c6e-11ea-8045-0242ac110017" in namespace "e2e-tests-secrets-2htsj" to be "success or failure" May 2 12:12:19.993: INFO: Pod "pod-secrets-2bf21959-8c6e-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.348848ms May 2 12:12:21.998: INFO: Pod "pod-secrets-2bf21959-8c6e-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007084659s May 2 12:12:24.003: INFO: Pod "pod-secrets-2bf21959-8c6e-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011689981s STEP: Saw pod success May 2 12:12:24.003: INFO: Pod "pod-secrets-2bf21959-8c6e-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:12:24.006: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-2bf21959-8c6e-11ea-8045-0242ac110017 container secret-volume-test: STEP: delete the pod May 2 12:12:24.028: INFO: Waiting for pod pod-secrets-2bf21959-8c6e-11ea-8045-0242ac110017 to disappear May 2 12:12:24.032: INFO: Pod pod-secrets-2bf21959-8c6e-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:12:24.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-2htsj" for this suite. 
May 2 12:12:30.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:12:30.060: INFO: namespace: e2e-tests-secrets-2htsj, resource: bindings, ignored listing per whitelist May 2 12:12:30.124: INFO: namespace e2e-tests-secrets-2htsj deletion completed in 6.088075152s • [SLOW TEST:10.277 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:12:30.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-320c3d60-8c6e-11ea-8045-0242ac110017 STEP: Creating a pod to test consume secrets May 2 12:12:30.250: INFO: Waiting up to 5m0s for pod "pod-secrets-32102f99-8c6e-11ea-8045-0242ac110017" in namespace "e2e-tests-secrets-nt2dm" to be "success or failure" May 2 12:12:30.288: INFO: Pod "pod-secrets-32102f99-8c6e-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 37.95632ms May 2 12:12:32.357: INFO: Pod "pod-secrets-32102f99-8c6e-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107632988s May 2 12:12:34.361: INFO: Pod "pod-secrets-32102f99-8c6e-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111836969s STEP: Saw pod success May 2 12:12:34.361: INFO: Pod "pod-secrets-32102f99-8c6e-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:12:34.364: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-32102f99-8c6e-11ea-8045-0242ac110017 container secret-volume-test: STEP: delete the pod May 2 12:12:34.415: INFO: Waiting for pod pod-secrets-32102f99-8c6e-11ea-8045-0242ac110017 to disappear May 2 12:12:34.422: INFO: Pod pod-secrets-32102f99-8c6e-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:12:34.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-nt2dm" for this suite. 
May 2 12:12:40.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:12:40.468: INFO: namespace: e2e-tests-secrets-nt2dm, resource: bindings, ignored listing per whitelist May 2 12:12:40.513: INFO: namespace e2e-tests-secrets-nt2dm deletion completed in 6.079063843s • [SLOW TEST:10.389 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:12:40.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 2 12:12:40.604: INFO: Creating deployment "test-recreate-deployment" May 2 12:12:40.675: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 2 12:12:40.680: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 2 12:12:42.686: INFO: Waiting deployment "test-recreate-deployment" to complete May 2 12:12:42.689: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018360, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018360, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018360, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018360, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 12:12:44.693: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 2 12:12:44.700: INFO: Updating deployment test-recreate-deployment May 2 12:12:44.700: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 2 12:12:44.986: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-gqjtr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gqjtr/deployments/test-recreate-deployment,UID:383d560e-8c6e-11ea-99e8-0242ac110002,ResourceVersion:8349322,Generation:2,CreationTimestamp:2020-05-02 12:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-02 12:12:44 +0000 UTC 2020-05-02 12:12:44 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-02 12:12:44 +0000 UTC 2020-05-02 12:12:40 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 2 12:12:44.990: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-gqjtr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gqjtr/replicasets/test-recreate-deployment-589c4bfd,UID:3ac5a59e-8c6e-11ea-99e8-0242ac110002,ResourceVersion:8349320,Generation:1,CreationTimestamp:2020-05-02 12:12:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 383d560e-8c6e-11ea-99e8-0242ac110002 0xc001e7297f 0xc001e72990}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 2 12:12:44.990: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 2 12:12:44.990: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-gqjtr,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gqjtr/replicasets/test-recreate-deployment-5bf7f65dc,UID:3848c62c-8c6e-11ea-99e8-0242ac110002,ResourceVersion:8349310,Generation:2,CreationTimestamp:2020-05-02 12:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 383d560e-8c6e-11ea-99e8-0242ac110002 0xc001e72a80 0xc001e72a81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 2 12:12:44.994: INFO: Pod "test-recreate-deployment-589c4bfd-85chb" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-85chb,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-gqjtr,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gqjtr/pods/test-recreate-deployment-589c4bfd-85chb,UID:3ac68918-8c6e-11ea-99e8-0242ac110002,ResourceVersion:8349321,Generation:0,CreationTimestamp:2020-05-02 12:12:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 3ac5a59e-8c6e-11ea-99e8-0242ac110002 0xc00240860f 0xc002408620}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-97lhn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-97lhn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-97lhn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002408690} {node.kubernetes.io/unreachable Exists NoExecute 0xc0024086b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:12:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:12:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:12:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:12:44 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-02 12:12:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:12:44.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-gqjtr" for this suite. 
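The spec dumps above describe the Deployment under test: strategy `Recreate`, label `name: sample-pod-3`, `terminationGracePeriodSeconds: 0`, with the pod template swapped from redis to nginx. A hypothetical reconstruction as a manifest (field values taken from the dump; anything else is a guess):

```shell
# Hedged sketch of the test-recreate-deployment manifest, rebuilt from the
# struct dump above. With strategy Recreate, all old pods are killed before
# any new ones are created — hence the transient "does not have minimum
# availability" condition logged during the rollout.
cat > /tmp/test-recreate-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
grep -c 'type: Recreate' /tmp/test-recreate-deployment.yaml
```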
May 2 12:12:51.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:12:51.066: INFO: namespace: e2e-tests-deployment-gqjtr, resource: bindings, ignored listing per whitelist May 2 12:12:51.092: INFO: namespace e2e-tests-deployment-gqjtr deletion completed in 6.094622799s • [SLOW TEST:10.578 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:12:51.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin May 2 12:12:51.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-6s5bs run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 2 12:12:54.330: INFO: stderr: 
"kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0502 12:12:54.262084 3150 log.go:172] (0xc0001389a0) (0xc0000f14a0) Create stream\nI0502 12:12:54.262171 3150 log.go:172] (0xc0001389a0) (0xc0000f14a0) Stream added, broadcasting: 1\nI0502 12:12:54.264995 3150 log.go:172] (0xc0001389a0) Reply frame received for 1\nI0502 12:12:54.265060 3150 log.go:172] (0xc0001389a0) (0xc000508640) Create stream\nI0502 12:12:54.265077 3150 log.go:172] (0xc0001389a0) (0xc000508640) Stream added, broadcasting: 3\nI0502 12:12:54.266462 3150 log.go:172] (0xc0001389a0) Reply frame received for 3\nI0502 12:12:54.266517 3150 log.go:172] (0xc0001389a0) (0xc000664000) Create stream\nI0502 12:12:54.266532 3150 log.go:172] (0xc0001389a0) (0xc000664000) Stream added, broadcasting: 5\nI0502 12:12:54.267462 3150 log.go:172] (0xc0001389a0) Reply frame received for 5\nI0502 12:12:54.267510 3150 log.go:172] (0xc0001389a0) (0xc0000f1540) Create stream\nI0502 12:12:54.267533 3150 log.go:172] (0xc0001389a0) (0xc0000f1540) Stream added, broadcasting: 7\nI0502 12:12:54.268367 3150 log.go:172] (0xc0001389a0) Reply frame received for 7\nI0502 12:12:54.268578 3150 log.go:172] (0xc000508640) (3) Writing data frame\nI0502 12:12:54.268673 3150 log.go:172] (0xc000508640) (3) Writing data frame\nI0502 12:12:54.269774 3150 log.go:172] (0xc0001389a0) Data frame received for 5\nI0502 12:12:54.269800 3150 log.go:172] (0xc000664000) (5) Data frame handling\nI0502 12:12:54.269827 3150 log.go:172] (0xc000664000) (5) Data frame sent\nI0502 12:12:54.270486 3150 log.go:172] (0xc0001389a0) Data frame received for 5\nI0502 12:12:54.270512 3150 log.go:172] (0xc000664000) (5) Data frame handling\nI0502 12:12:54.270536 3150 log.go:172] (0xc000664000) (5) Data frame sent\nI0502 12:12:54.303899 3150 log.go:172] (0xc0001389a0) Data frame received for 7\nI0502 
12:12:54.303931 3150 log.go:172] (0xc0000f1540) (7) Data frame handling\nI0502 12:12:54.303946 3150 log.go:172] (0xc0001389a0) Data frame received for 5\nI0502 12:12:54.303953 3150 log.go:172] (0xc000664000) (5) Data frame handling\nI0502 12:12:54.304429 3150 log.go:172] (0xc0001389a0) Data frame received for 1\nI0502 12:12:54.304463 3150 log.go:172] (0xc0000f14a0) (1) Data frame handling\nI0502 12:12:54.304508 3150 log.go:172] (0xc0000f14a0) (1) Data frame sent\nI0502 12:12:54.304573 3150 log.go:172] (0xc0001389a0) (0xc0000f14a0) Stream removed, broadcasting: 1\nI0502 12:12:54.304630 3150 log.go:172] (0xc0001389a0) (0xc000508640) Stream removed, broadcasting: 3\nI0502 12:12:54.304682 3150 log.go:172] (0xc0001389a0) Go away received\nI0502 12:12:54.304799 3150 log.go:172] (0xc0001389a0) (0xc0000f14a0) Stream removed, broadcasting: 1\nI0502 12:12:54.304832 3150 log.go:172] (0xc0001389a0) (0xc000508640) Stream removed, broadcasting: 3\nI0502 12:12:54.304844 3150 log.go:172] (0xc0001389a0) (0xc000664000) Stream removed, broadcasting: 5\nI0502 12:12:54.304856 3150 log.go:172] (0xc0001389a0) (0xc0000f1540) Stream removed, broadcasting: 7\n" May 2 12:12:54.331: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:12:56.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-6s5bs" for this suite. 
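Stripped of the stream-multiplexing noise in the stderr dump, the `--rm` job's container command (copied verbatim from the kubectl invocation above) can be reproduced locally, and explains the `abcd1234stdin closed` stdout:

```shell
# Same command the job container runs: echo piped stdin back, then a marker.
# "abcd1234" is the data the test writes on the attached stdin stream.
printf 'abcd1234' | sh -c 'cat && echo "stdin closed"'
# prints: abcd1234stdin closed
```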
May 2 12:13:02.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:13:02.381: INFO: namespace: e2e-tests-kubectl-6s5bs, resource: bindings, ignored listing per whitelist May 2 12:13:02.444: INFO: namespace e2e-tests-kubectl-6s5bs deletion completed in 6.102586105s • [SLOW TEST:11.352 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:13:02.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 2 12:13:02.620: INFO: Waiting up to 5m0s for pod "downward-api-4558c98e-8c6e-11ea-8045-0242ac110017" in namespace "e2e-tests-downward-api-4t7dt" to be "success or failure" May 2 12:13:02.638: INFO: Pod "downward-api-4558c98e-8c6e-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.224844ms May 2 12:13:04.656: INFO: Pod "downward-api-4558c98e-8c6e-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036237048s May 2 12:13:06.660: INFO: Pod "downward-api-4558c98e-8c6e-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040193116s STEP: Saw pod success May 2 12:13:06.660: INFO: Pod "downward-api-4558c98e-8c6e-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:13:06.663: INFO: Trying to get logs from node hunter-worker pod downward-api-4558c98e-8c6e-11ea-8045-0242ac110017 container dapi-container: STEP: delete the pod May 2 12:13:06.681: INFO: Waiting for pod downward-api-4558c98e-8c6e-11ea-8045-0242ac110017 to disappear May 2 12:13:06.686: INFO: Pod downward-api-4558c98e-8c6e-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:13:06.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4t7dt" for this suite. 
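The Downward API test above provides pod name, namespace, and IP as env vars to a `dapi-container`. A hedged sketch of what such a pod spec looks like (the env var names and fieldPaths here are illustrative, not copied from the test source):

```shell
# Hypothetical downward-API wiring: each env var is populated from a pod
# field via fieldRef rather than a literal value.
cat > /tmp/downward-api-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-pod
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
grep -c 'fieldRef' /tmp/downward-api-pod.yaml
```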
May 2 12:13:12.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:13:12.714: INFO: namespace: e2e-tests-downward-api-4t7dt, resource: bindings, ignored listing per whitelist May 2 12:13:12.841: INFO: namespace e2e-tests-downward-api-4t7dt deletion completed in 6.152270158s • [SLOW TEST:10.398 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:13:12.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-4b7fe636-8c6e-11ea-8045-0242ac110017 STEP: Creating configMap with name cm-test-opt-upd-4b7fe686-8c6e-11ea-8045-0242ac110017 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-4b7fe636-8c6e-11ea-8045-0242ac110017 STEP: Updating configmap cm-test-opt-upd-4b7fe686-8c6e-11ea-8045-0242ac110017 STEP: Creating configMap with name cm-test-opt-create-4b7fe6a7-8c6e-11ea-8045-0242ac110017 STEP: waiting to observe update 
in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:14:27.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-5pcw8" for this suite. May 2 12:14:51.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:14:51.616: INFO: namespace: e2e-tests-projected-5pcw8, resource: bindings, ignored listing per whitelist May 2 12:14:51.660: INFO: namespace e2e-tests-projected-5pcw8 deletion completed in 24.088363553s • [SLOW TEST:98.819 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:14:51.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-7l88 STEP: Creating a pod to test 
atomic-volume-subpath May 2 12:14:51.797: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7l88" in namespace "e2e-tests-subpath-67hkh" to be "success or failure" May 2 12:14:51.825: INFO: Pod "pod-subpath-test-configmap-7l88": Phase="Pending", Reason="", readiness=false. Elapsed: 27.820339ms May 2 12:14:53.829: INFO: Pod "pod-subpath-test-configmap-7l88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0317295s May 2 12:14:56.551: INFO: Pod "pod-subpath-test-configmap-7l88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.754169182s May 2 12:14:58.587: INFO: Pod "pod-subpath-test-configmap-7l88": Phase="Running", Reason="", readiness=true. Elapsed: 6.789565055s May 2 12:15:00.592: INFO: Pod "pod-subpath-test-configmap-7l88": Phase="Running", Reason="", readiness=false. Elapsed: 8.794559552s May 2 12:15:02.596: INFO: Pod "pod-subpath-test-configmap-7l88": Phase="Running", Reason="", readiness=false. Elapsed: 10.799034516s May 2 12:15:04.601: INFO: Pod "pod-subpath-test-configmap-7l88": Phase="Running", Reason="", readiness=false. Elapsed: 12.803825755s May 2 12:15:06.605: INFO: Pod "pod-subpath-test-configmap-7l88": Phase="Running", Reason="", readiness=false. Elapsed: 14.808191464s May 2 12:15:08.610: INFO: Pod "pod-subpath-test-configmap-7l88": Phase="Running", Reason="", readiness=false. Elapsed: 16.812523865s May 2 12:15:10.614: INFO: Pod "pod-subpath-test-configmap-7l88": Phase="Running", Reason="", readiness=false. Elapsed: 18.816302529s May 2 12:15:12.618: INFO: Pod "pod-subpath-test-configmap-7l88": Phase="Running", Reason="", readiness=false. Elapsed: 20.820404037s May 2 12:15:14.622: INFO: Pod "pod-subpath-test-configmap-7l88": Phase="Running", Reason="", readiness=false. Elapsed: 22.824632179s May 2 12:15:16.647: INFO: Pod "pod-subpath-test-configmap-7l88": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.849753894s May 2 12:15:18.651: INFO: Pod "pod-subpath-test-configmap-7l88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.853756074s STEP: Saw pod success May 2 12:15:18.651: INFO: Pod "pod-subpath-test-configmap-7l88" satisfied condition "success or failure" May 2 12:15:18.654: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-7l88 container test-container-subpath-configmap-7l88: STEP: delete the pod May 2 12:15:18.707: INFO: Waiting for pod pod-subpath-test-configmap-7l88 to disappear May 2 12:15:18.749: INFO: Pod pod-subpath-test-configmap-7l88 no longer exists STEP: Deleting pod pod-subpath-test-configmap-7l88 May 2 12:15:18.749: INFO: Deleting pod "pod-subpath-test-configmap-7l88" in namespace "e2e-tests-subpath-67hkh" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:15:18.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-67hkh" for this suite. 
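The subpath test above (`pod-subpath-test-configmap-7l88`) mounts a single entry of a ConfigMap volume via `subPath`. A minimal sketch of that shape, with invented names, assuming the standard `volumeMounts.subPath` mechanism:

```shell
# Hedged sketch: subPath mounts one path within the volume at the mountPath,
# instead of the whole volume directory.
cat > /tmp/pod-subpath-test-configmap.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /test/file"]
    volumeMounts:
    - name: config
      mountPath: /test/file
      subPath: configmap-key   # only this key of the volume appears at /test/file
  volumes:
  - name: config
    configMap:
      name: my-configmap       # hypothetical ConfigMap name
EOF
grep -c 'subPath:' /tmp/pod-subpath-test-configmap.yaml
```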
May 2 12:15:24.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:15:24.838: INFO: namespace: e2e-tests-subpath-67hkh, resource: bindings, ignored listing per whitelist May 2 12:15:24.844: INFO: namespace e2e-tests-subpath-67hkh deletion completed in 6.088928875s • [SLOW TEST:33.183 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:15:24.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service endpoint-test2 in namespace e2e-tests-services-x8xn7 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-x8xn7 to expose endpoints map[] May 2 12:15:24.998: INFO: Get endpoints failed (20.04865ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 2 12:15:26.002: INFO: successfully 
validated that service endpoint-test2 in namespace e2e-tests-services-x8xn7 exposes endpoints map[] (1.024161154s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-x8xn7 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-x8xn7 to expose endpoints map[pod1:[80]] May 2 12:15:29.055: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-x8xn7 exposes endpoints map[pod1:[80]] (3.045115702s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-x8xn7 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-x8xn7 to expose endpoints map[pod1:[80] pod2:[80]] May 2 12:15:32.256: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-x8xn7 exposes endpoints map[pod1:[80] pod2:[80]] (3.197426287s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-x8xn7 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-x8xn7 to expose endpoints map[pod2:[80]] May 2 12:15:33.308: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-x8xn7 exposes endpoints map[pod2:[80]] (1.047860152s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-x8xn7 STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-x8xn7 to expose endpoints map[] May 2 12:15:34.333: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-x8xn7 exposes endpoints map[] (1.021701712s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:15:34.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-x8xn7" for this suite. 
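The repeated "waiting up to 3m0s for service ... to expose endpoints" lines above follow a simple poll-until-timeout pattern. A minimal local sketch of that loop, with a stubbed check (`check_endpoints` is hypothetical, standing in for the real endpoints query, and the timeout is shortened):

```shell
# Poll until the condition holds or the deadline passes; the e2e framework
# logs each failed poll the way the lines above do.
start=$(date +%s)
deadline=$(( start + 30 ))                                   # e2e uses 3m0s
check_endpoints() { [ $(( $(date +%s) - start )) -ge 2 ]; }  # stub: true after ~2s
status=timeout
while [ "$(date +%s)" -lt "$deadline" ]; do
  if check_endpoints; then status=ok; break; fi
  sleep 1
done
echo "endpoint wait finished: $status"
```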
May 2 12:15:56.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:15:56.630: INFO: namespace: e2e-tests-services-x8xn7, resource: bindings, ignored listing per whitelist May 2 12:15:56.695: INFO: namespace e2e-tests-services-x8xn7 deletion completed in 22.15829263s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:31.852 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:15:56.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-fz2xm May 2 12:16:00.848: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-fz2xm STEP: checking the pod's current state and verifying that restartCount is 
present May 2 12:16:00.851: INFO: Initial restart count of pod liveness-http is 0 May 2 12:16:20.911: INFO: Restart count of pod e2e-tests-container-probe-fz2xm/liveness-http is now 1 (20.05945586s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:16:20.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-fz2xm" for this suite. May 2 12:16:26.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:16:26.989: INFO: namespace: e2e-tests-container-probe-fz2xm, resource: bindings, ignored listing per whitelist May 2 12:16:27.044: INFO: namespace e2e-tests-container-probe-fz2xm deletion completed in 6.088992312s • [SLOW TEST:30.349 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:16:27.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-bf4abb13-8c6e-11ea-8045-0242ac110017 STEP: Creating a pod to test consume configMaps May 2 12:16:27.202: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bf4b425c-8c6e-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-fssvl" to be "success or failure" May 2 12:16:27.207: INFO: Pod "pod-projected-configmaps-bf4b425c-8c6e-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 5.205943ms May 2 12:16:29.212: INFO: Pod "pod-projected-configmaps-bf4b425c-8c6e-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009935096s May 2 12:16:31.215: INFO: Pod "pod-projected-configmaps-bf4b425c-8c6e-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013537328s STEP: Saw pod success May 2 12:16:31.216: INFO: Pod "pod-projected-configmaps-bf4b425c-8c6e-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:16:31.218: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-bf4b425c-8c6e-11ea-8045-0242ac110017 container projected-configmap-volume-test: STEP: delete the pod May 2 12:16:31.256: INFO: Waiting for pod pod-projected-configmaps-bf4b425c-8c6e-11ea-8045-0242ac110017 to disappear May 2 12:16:31.294: INFO: Pod pod-projected-configmaps-bf4b425c-8c6e-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:16:31.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-fssvl" for this suite. 
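The projected ConfigMap test above checks "mappings and Item mode set": per-item key-to-path renaming plus an explicit file mode. A hedged sketch of that volume shape (key names and paths are invented for illustration):

```shell
# Hypothetical projected volume with a per-item mapping and mode.
cat > /tmp/pod-projected-configmap.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1
            path: path/to/data-2   # mapping: the key is renamed on disk
            mode: 0400             # "Item mode set": per-file permission bits
EOF
grep -c 'mode:' /tmp/pod-projected-configmap.yaml
```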
May 2 12:16:37.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:16:37.348: INFO: namespace: e2e-tests-projected-fssvl, resource: bindings, ignored listing per whitelist May 2 12:16:37.386: INFO: namespace e2e-tests-projected-fssvl deletion completed in 6.088439958s • [SLOW TEST:10.342 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:16:37.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 2 12:16:37.500: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:16:41.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-pods-g2rg8" for this suite. May 2 12:17:23.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:17:23.731: INFO: namespace: e2e-tests-pods-g2rg8, resource: bindings, ignored listing per whitelist May 2 12:17:23.738: INFO: namespace e2e-tests-pods-g2rg8 deletion completed in 42.092214739s • [SLOW TEST:46.352 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:17:23.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 2 12:17:31.987: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 2 12:17:31.992: INFO: Pod pod-with-poststart-http-hook still exists May 2 12:17:33.993: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 2 12:17:33.997: INFO: Pod pod-with-poststart-http-hook still exists May 2 12:17:35.992: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 2 12:17:35.996: INFO: Pod pod-with-poststart-http-hook still exists May 2 12:17:37.992: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 2 12:17:37.996: INFO: Pod pod-with-poststart-http-hook still exists May 2 12:17:39.992: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 2 12:17:39.996: INFO: Pod pod-with-poststart-http-hook still exists May 2 12:17:41.992: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 2 12:17:41.997: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:17:41.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-g4ccm" for this suite. 
May 2 12:18:04.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:18:04.052: INFO: namespace: e2e-tests-container-lifecycle-hook-g4ccm, resource: bindings, ignored listing per whitelist May 2 12:18:04.092: INFO: namespace e2e-tests-container-lifecycle-hook-g4ccm deletion completed in 22.091618678s • [SLOW TEST:40.354 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:18:04.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 2 12:18:04.231: INFO: Creating deployment "nginx-deployment" May 2 12:18:04.235: INFO: Waiting for observed generation 1 May 2 12:18:06.244: INFO: Waiting for all required pods to come up May 2 12:18:06.247: INFO: Pod name nginx: Found 10 pods out of 10 
STEP: ensuring each pod is running May 2 12:18:16.253: INFO: Waiting for deployment "nginx-deployment" to complete May 2 12:18:16.258: INFO: Updating deployment "nginx-deployment" with a non-existent image May 2 12:18:16.263: INFO: Updating deployment nginx-deployment May 2 12:18:16.263: INFO: Waiting for observed generation 2 May 2 12:18:18.284: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 2 12:18:18.287: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 2 12:18:18.289: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 2 12:18:18.298: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 2 12:18:18.298: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 2 12:18:18.300: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas May 2 12:18:18.304: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas May 2 12:18:18.304: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 May 2 12:18:18.309: INFO: Updating deployment nginx-deployment May 2 12:18:18.309: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas May 2 12:18:18.353: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 2 12:18:18.359: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 2 12:18:18.421: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gfx8m/deployments/nginx-deployment,UID:f922f166-8c6e-11ea-99e8-0242ac110002,ResourceVersion:8350492,Generation:3,CreationTimestamp:2020-05-02 12:18:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-05-02 12:18:16 +0000 UTC 2020-05-02 12:18:04 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-05-02 12:18:18 +0000 UTC 2020-05-02 12:18:18 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} May 2 12:18:18.472: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gfx8m/replicasets/nginx-deployment-5c98f8fb5,UID:004ee5f9-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350480,Generation:3,CreationTimestamp:2020-05-02 12:18:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f922f166-8c6e-11ea-99e8-0242ac110002 0xc0019054e7 0xc0019054e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 2 12:18:18.472: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 2 12:18:18.472: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gfx8m/replicasets/nginx-deployment-85ddf47c5d,UID:f9247ab3-8c6e-11ea-99e8-0242ac110002,ResourceVersion:8350478,Generation:3,CreationTimestamp:2020-05-02 12:18:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment f922f166-8c6e-11ea-99e8-0242ac110002 0xc0019055a7 0xc0019055a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 2 12:18:18.604: INFO: Pod "nginx-deployment-5c98f8fb5-2d8ls" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2d8ls,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-5c98f8fb5-2d8ls,UID:0050fbff-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350453,Generation:0,CreationTimestamp:2020-05-02 12:18:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 004ee5f9-8c6f-11ea-99e8-0242ac110002 0xc0012fd377 0xc0012fd378}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0012fd4d0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0012fd4f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-02 12:18:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.604: INFO: Pod "nginx-deployment-5c98f8fb5-2x58c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2x58c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-5c98f8fb5-2x58c,UID:006b0ce1-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350468,Generation:0,CreationTimestamp:2020-05-02 12:18:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 004ee5f9-8c6f-11ea-99e8-0242ac110002 0xc0012fd630 0xc0012fd631}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0012fd6b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0012fd6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-02 12:18:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.604: INFO: Pod "nginx-deployment-5c98f8fb5-66s6s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-66s6s,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-5c98f8fb5-66s6s,UID:0197c0f4-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350521,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 004ee5f9-8c6f-11ea-99e8-0242ac110002 0xc0012fd820 0xc0012fd821}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0012fd900} {node.kubernetes.io/unreachable Exists NoExecute 0xc0012fd920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.604: INFO: Pod "nginx-deployment-5c98f8fb5-9n68r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9n68r,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-5c98f8fb5-9n68r,UID:00500100-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350447,Generation:0,CreationTimestamp:2020-05-02 12:18:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 004ee5f9-8c6f-11ea-99e8-0242ac110002 0xc0012fd997 0xc0012fd998}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0012fdb50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0012fdb70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-02 12:18:16 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 2 12:18:18.604: INFO: Pod "nginx-deployment-5c98f8fb5-dc5z2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dc5z2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-5c98f8fb5-dc5z2,UID:018ef140-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350494,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 004ee5f9-8c6f-11ea-99e8-0242ac110002 0xc0012fdc30 0xc0012fdc31}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0012fdd40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0012fdd60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 2 12:18:18.604: INFO: Pod "nginx-deployment-5c98f8fb5-j6l9g" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-j6l9g,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-5c98f8fb5-j6l9g,UID:019ba144-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350532,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 004ee5f9-8c6f-11ea-99e8-0242ac110002 0xc0012fddd7 0xc0012fddd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0012fde50} {node.kubernetes.io/unreachable Exists NoExecute 0xc0012fdee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 2 12:18:18.605: INFO: Pod "nginx-deployment-5c98f8fb5-n2wqm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n2wqm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-5c98f8fb5-n2wqm,UID:018f01cb-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350504,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 004ee5f9-8c6f-11ea-99e8-0242ac110002 0xc0012fdf57 0xc0012fdf58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0012fdfd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0012fdff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 2 12:18:18.605: INFO: Pod "nginx-deployment-5c98f8fb5-p2ltx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p2ltx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-5c98f8fb5-p2ltx,UID:00639357-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350466,Generation:0,CreationTimestamp:2020-05-02 12:18:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 004ee5f9-8c6f-11ea-99e8-0242ac110002 0xc001a30097 0xc001a30098}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a30510} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a30530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:,StartTime:2020-05-02 12:18:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 2 12:18:18.605: INFO: Pod "nginx-deployment-5c98f8fb5-p999t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p999t,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-5c98f8fb5-p999t,UID:005109ca-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350462,Generation:0,CreationTimestamp:2020-05-02 12:18:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 004ee5f9-8c6f-11ea-99e8-0242ac110002 0xc001a305f0 0xc001a305f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a30670} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a306d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:16 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-05-02 12:18:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 2 12:18:18.605: INFO: Pod "nginx-deployment-5c98f8fb5-rztcj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rztcj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-5c98f8fb5-rztcj,UID:018dc5e7-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350487,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 004ee5f9-8c6f-11ea-99e8-0242ac110002 0xc001a307b0 0xc001a307b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a308e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a30900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 2 12:18:18.605: INFO: Pod "nginx-deployment-5c98f8fb5-tb7mc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-tb7mc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-5c98f8fb5-tb7mc,UID:0197a841-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350507,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 004ee5f9-8c6f-11ea-99e8-0242ac110002 0xc001a30977 0xc001a30978}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a30a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a30a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 2 12:18:18.606: INFO: Pod "nginx-deployment-5c98f8fb5-xrjcp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xrjcp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-5c98f8fb5-xrjcp,UID:0197c561-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350518,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 004ee5f9-8c6f-11ea-99e8-0242ac110002 0xc001a30b17 0xc001a30b18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a30be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a314c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 2 12:18:18.606: INFO: Pod "nginx-deployment-5c98f8fb5-zf9xd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zf9xd,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-5c98f8fb5-zf9xd,UID:0197b853-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350522,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 004ee5f9-8c6f-11ea-99e8-0242ac110002 0xc001a315d7 0xc001a315d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a316c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a316e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 2 12:18:18.606: INFO: Pod "nginx-deployment-85ddf47c5d-2nmwr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2nmwr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-2nmwr,UID:019b8c02-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350527,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001a31ac7 0xc001a31ac8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f14620} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f14670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 2 12:18:18.606: INFO: Pod "nginx-deployment-85ddf47c5d-4cxv4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4cxv4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-4cxv4,UID:019b9d4b-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350529,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f14717 0xc001f14718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f14800} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f14880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 2 12:18:18.606: INFO: Pod "nginx-deployment-85ddf47c5d-6skn5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6skn5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-6skn5,UID:0197bf9a-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350519,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f14a77 0xc001f14a78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f14b50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f14b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 2 12:18:18.606: INFO: Pod "nginx-deployment-85ddf47c5d-7h229" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7h229,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-7h229,UID:f92b330f-8c6e-11ea-99e8-0242ac110002,ResourceVersion:8350359,Generation:0,CreationTimestamp:2020-05-02 12:18:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f14c37 0xc001f14c38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f14df0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f14e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.7,StartTime:2020-05-02 12:18:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 12:18:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4241785a8fa6f8bdd941273cee6ec6a449c339ccff6a972ae0435938acf0e488}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
May 2 12:18:18.606: INFO: Pod "nginx-deployment-85ddf47c5d-7j8kp" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7j8kp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-7j8kp,UID:f92eb19f-8c6e-11ea-99e8-0242ac110002,ResourceVersion:8350401,Generation:0,CreationTimestamp:2020-05-02 12:18:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f14f67 0xc001f14f68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f15140} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f15160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.18,StartTime:2020-05-02 12:18:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 12:18:13 +0000 UTC,} nil} {nil
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://304733262a2595aff83d889b3e9f73d61162b688e6e292934e4e9d6a10658869}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.607: INFO: Pod "nginx-deployment-85ddf47c5d-8vwkx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8vwkx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-8vwkx,UID:018fbee0-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350500,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f152a7 0xc001f152a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f154b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f154d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.607: INFO: Pod "nginx-deployment-85ddf47c5d-br8g8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-br8g8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-br8g8,UID:018bc2cc-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350486,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f155a7 0xc001f155a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f15660} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f157b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.607: INFO: Pod "nginx-deployment-85ddf47c5d-bvb4b" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bvb4b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-bvb4b,UID:f92ead5b-8c6e-11ea-99e8-0242ac110002,ResourceVersion:8350398,Generation:0,CreationTimestamp:2020-05-02 12:18:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f15887 0xc001f15888}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001f159a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f159c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.17,StartTime:2020-05-02 12:18:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 12:18:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2fc34f5f248ebed65f6433d611fb135bff6064aefd5acca9ac24d80ea1ca8388}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.607: INFO: Pod "nginx-deployment-85ddf47c5d-cbnrf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cbnrf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-cbnrf,UID:018fb91e-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350505,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f15c77 0xc001f15c78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f15d60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f15e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.607: INFO: Pod "nginx-deployment-85ddf47c5d-db8hk" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-db8hk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-db8hk,UID:0197a6fa-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350510,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f15ed7 0xc001f15ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001f15fa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f30000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.608: INFO: Pod "nginx-deployment-85ddf47c5d-f9xpd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f9xpd,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-f9xpd,UID:019ba01e-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350528,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f30077 0xc001f30078}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f300f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f30110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.608: INFO: Pod "nginx-deployment-85ddf47c5d-ffdwp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-ffdwp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-ffdwp,UID:0197b691-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350512,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f30187 0xc001f30188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f30200} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f30220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.608: INFO: Pod "nginx-deployment-85ddf47c5d-fxts7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fxts7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-fxts7,UID:019b501a-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350524,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f30567 0xc001f30568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001f31420} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f31440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.608: INFO: Pod "nginx-deployment-85ddf47c5d-gpvgl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gpvgl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-gpvgl,UID:f92ea425-8c6e-11ea-99e8-0242ac110002,ResourceVersion:8350379,Generation:0,CreationTimestamp:2020-05-02 12:18:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f317c7 0xc001f317c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f31880} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f318a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.16,StartTime:2020-05-02 12:18:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 12:18:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5c4512ab6fe177008bedb94052bc521fa8fcc1d80bfe06aff6a9562a5a2eff57}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.608: INFO: Pod "nginx-deployment-85ddf47c5d-qrz4n" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qrz4n,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-qrz4n,UID:f92b2f83-8c6e-11ea-99e8-0242ac110002,ResourceVersion:8350382,Generation:0,CreationTimestamp:2020-05-02 12:18:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f31a47 0xc001f31a48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001f31ac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f31ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.8,StartTime:2020-05-02 12:18:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 12:18:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8e53123e7c665dfe5959bcea65b62dbcd3672b921a186a39c4ba6daa791bbec8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.608: INFO: Pod "nginx-deployment-85ddf47c5d-sj4fv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sj4fv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-sj4fv,UID:f937eaa4-8c6e-11ea-99e8-0242ac110002,ResourceVersion:8350412,Generation:0,CreationTimestamp:2020-05-02 12:18:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f90087 0xc001f90088}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f901c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f901e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.10,StartTime:2020-05-02 12:18:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 12:18:14 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://513c16c580773baeeaa5d64b25921dbbe83226c8399fab962ac2ecea76e0de6d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.609: INFO: Pod "nginx-deployment-85ddf47c5d-tnkdm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tnkdm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-tnkdm,UID:f92e8820-8c6e-11ea-99e8-0242ac110002,ResourceVersion:8350387,Generation:0,CreationTimestamp:2020-05-02 12:18:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f902a7 0xc001f902a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f90320} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f90340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.1.9,StartTime:2020-05-02 12:18:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 12:18:12 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b22229bfab61f47ef174aa357d74a38fcb3c33d9df36c5b6986f4c211e384a7f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.609: INFO: Pod "nginx-deployment-85ddf47c5d-vqm8h" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vqm8h,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-vqm8h,UID:0197ca92-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350520,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f90407 0xc001f90408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc001f90480} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f904a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.609: INFO: Pod "nginx-deployment-85ddf47c5d-vrl2b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vrl2b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-vrl2b,UID:f92a9d19-8c6e-11ea-99e8-0242ac110002,ResourceVersion:8350367,Generation:0,CreationTimestamp:2020-05-02 12:18:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f90517 0xc001f90518}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f905a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f905c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.15,StartTime:2020-05-02 12:18:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-02 12:18:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://35b8b1f67a1be772acac61d17e216c29fc5d3a4f590ff464eed89a0c44218a50}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 2 12:18:18.609: INFO: Pod "nginx-deployment-85ddf47c5d-x7vxg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-x7vxg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-gfx8m,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gfx8m/pods/nginx-deployment-85ddf47c5d-x7vxg,UID:019b8048-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350526,Generation:0,CreationTimestamp:2020-05-02 12:18:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d f9247ab3-8c6e-11ea-99e8-0242ac110002 0xc001f90687 0xc001f90688}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-66zj8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66zj8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-66zj8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001f90700} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f90720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:18:18 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:18:18.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-gfx8m" for this suite.
May 2 12:18:40.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:18:40.852: INFO: namespace: e2e-tests-deployment-gfx8m, resource: bindings, ignored listing per whitelist
May 2 12:18:40.902: INFO: namespace e2e-tests-deployment-gfx8m deletion completed in 22.143034081s
• [SLOW TEST:36.809 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:18:40.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a 
default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 2 12:18:41.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-zdc84'
May 2 12:18:41.214: INFO: stderr: ""
May 2 12:18:41.214: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
May 2 12:18:41.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-zdc84'
May 2 12:18:51.268: INFO: stderr: ""
May 2 12:18:51.268: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:18:51.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zdc84" for this suite.
May 2 12:18:57.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:18:57.344: INFO: namespace: e2e-tests-kubectl-zdc84, resource: bindings, ignored listing per whitelist
May 2 12:18:57.423: INFO: namespace e2e-tests-kubectl-zdc84 deletion completed in 6.108087402s
• [SLOW TEST:16.521 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:18:57.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 2 12:18:57.542: INFO: Pod name rollover-pod: Found 0 pods out of 1
May 2 12:19:02.546: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 2 12:19:04.559: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May 2 12:19:06.578: INFO: Creating deployment 
"test-rollover-deployment" May 2 12:19:06.594: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 2 12:19:08.600: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 2 12:19:08.605: INFO: Ensure that both replica sets have 1 created replica May 2 12:19:08.611: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 2 12:19:08.618: INFO: Updating deployment test-rollover-deployment May 2 12:19:08.618: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 2 12:19:10.668: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 2 12:19:10.729: INFO: Make sure deployment "test-rollover-deployment" is complete May 2 12:19:10.736: INFO: all replica sets need to contain the pod-template-hash label May 2 12:19:10.736: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018748, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 12:19:12.743: INFO: all replica sets need to contain the pod-template-hash label May 2 12:19:12.743: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 12:19:14.743: INFO: all replica sets need to contain the pod-template-hash label May 2 12:19:14.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 12:19:16.744: INFO: all replica 
sets need to contain the pod-template-hash label May 2 12:19:16.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 12:19:18.743: INFO: all replica sets need to contain the pod-template-hash label May 2 12:19:18.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 12:19:20.751: INFO: all replica sets need to contain the pod-template-hash label May 2 12:19:20.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018751, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724018746, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 2 12:19:22.909: INFO: May 2 12:19:22.909: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 May 2 12:19:22.916: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-x4jbn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x4jbn/deployments/test-rollover-deployment,UID:1e4c50ac-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350948,Generation:2,CreationTimestamp:2020-05-02 12:19:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-02 12:19:06 +0000 UTC 2020-05-02 12:19:06 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-02 12:19:22 +0000 UTC 2020-05-02 12:19:06 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 2 12:19:22.918: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-x4jbn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x4jbn/replicasets/test-rollover-deployment-5b8479fdb6,UID:1f83a5c1-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350939,Generation:2,CreationTimestamp:2020-05-02 12:19:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1e4c50ac-8c6f-11ea-99e8-0242ac110002 0xc002902e87 0xc002902e88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 2 12:19:22.918: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 2 12:19:22.918: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-x4jbn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x4jbn/replicasets/test-rollover-controller,UID:18e80a43-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350947,Generation:2,CreationTimestamp:2020-05-02 12:18:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1e4c50ac-8c6f-11ea-99e8-0242ac110002 0xc002902cf7 0xc002902cf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 2 12:19:22.918: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-x4jbn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x4jbn/replicasets/test-rollover-deployment-58494b7559,UID:1e4ff9b7-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350904,Generation:2,CreationTimestamp:2020-05-02 12:19:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1e4c50ac-8c6f-11ea-99e8-0242ac110002 0xc002902db7 0xc002902db8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 2 12:19:22.921: INFO: Pod "test-rollover-deployment-5b8479fdb6-hbnrm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-hbnrm,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-x4jbn,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x4jbn/pods/test-rollover-deployment-5b8479fdb6-hbnrm,UID:1f931a8d-8c6f-11ea-99e8-0242ac110002,ResourceVersion:8350917,Generation:0,CreationTimestamp:2020-05-02 12:19:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 1f83a5c1-8c6f-11ea-99e8-0242ac110002 0xc001d4be27 0xc001d4be28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-nm4sg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nm4sg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-nm4sg true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d4bea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d4bec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:19:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:19:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:19:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-02 12:19:08 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.4,PodIP:10.244.2.35,StartTime:2020-05-02 12:19:08 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-02 12:19:11 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://94f2e4ca3f9674b228336f802ebc6d76f88cf53989dafe345c5d4b568dad6488}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:19:22.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-x4jbn" for this suite. May 2 12:19:30.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:19:30.965: INFO: namespace: e2e-tests-deployment-x4jbn, resource: bindings, ignored listing per whitelist May 2 12:19:31.052: INFO: namespace e2e-tests-deployment-x4jbn deletion completed in 8.128215129s • [SLOW TEST:33.629 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:19:31.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs May 2 12:19:31.168: INFO: Waiting up to 5m0s for pod "pod-2cef5f1d-8c6f-11ea-8045-0242ac110017" 
in namespace "e2e-tests-emptydir-psffl" to be "success or failure" May 2 12:19:31.176: INFO: Pod "pod-2cef5f1d-8c6f-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 7.763043ms May 2 12:19:33.183: INFO: Pod "pod-2cef5f1d-8c6f-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015458558s May 2 12:19:35.188: INFO: Pod "pod-2cef5f1d-8c6f-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019676861s STEP: Saw pod success May 2 12:19:35.188: INFO: Pod "pod-2cef5f1d-8c6f-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:19:35.191: INFO: Trying to get logs from node hunter-worker pod pod-2cef5f1d-8c6f-11ea-8045-0242ac110017 container test-container: STEP: delete the pod May 2 12:19:35.243: INFO: Waiting for pod pod-2cef5f1d-8c6f-11ea-8045-0242ac110017 to disappear May 2 12:19:35.260: INFO: Pod pod-2cef5f1d-8c6f-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:19:35.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-psffl" for this suite. 
May 2 12:19:41.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:19:41.416: INFO: namespace: e2e-tests-emptydir-psffl, resource: bindings, ignored listing per whitelist May 2 12:19:41.456: INFO: namespace e2e-tests-emptydir-psffl deletion completed in 6.192346151s • [SLOW TEST:10.403 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:19:41.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-3324e90c-8c6f-11ea-8045-0242ac110017 STEP: Creating a pod to test consume configMaps May 2 12:19:41.575: INFO: Waiting up to 5m0s for pod "pod-configmaps-3327080b-8c6f-11ea-8045-0242ac110017" in namespace "e2e-tests-configmap-4j959" to be "success or failure" May 2 12:19:41.603: INFO: Pod "pod-configmaps-3327080b-8c6f-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.25136ms May 2 12:19:43.607: INFO: Pod "pod-configmaps-3327080b-8c6f-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032222681s May 2 12:19:45.611: INFO: Pod "pod-configmaps-3327080b-8c6f-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036279561s STEP: Saw pod success May 2 12:19:45.611: INFO: Pod "pod-configmaps-3327080b-8c6f-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:19:45.613: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-3327080b-8c6f-11ea-8045-0242ac110017 container configmap-volume-test: STEP: delete the pod May 2 12:19:45.646: INFO: Waiting for pod pod-configmaps-3327080b-8c6f-11ea-8045-0242ac110017 to disappear May 2 12:19:45.695: INFO: Pod pod-configmaps-3327080b-8c6f-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:19:45.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4j959" for this suite. 
May 2 12:19:51.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:19:51.779: INFO: namespace: e2e-tests-configmap-4j959, resource: bindings, ignored listing per whitelist May 2 12:19:51.809: INFO: namespace e2e-tests-configmap-4j959 deletion completed in 6.108702237s • [SLOW TEST:10.353 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:19:51.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:19:55.982: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-jdm9s" for this suite. May 2 12:20:02.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:20:02.125: INFO: namespace: e2e-tests-kubelet-test-jdm9s, resource: bindings, ignored listing per whitelist May 2 12:20:02.125: INFO: namespace e2e-tests-kubelet-test-jdm9s deletion completed in 6.13726878s • [SLOW TEST:10.316 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:20:02.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod 
to test downward API volume plugin May 2 12:20:02.214: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f74827b-8c6f-11ea-8045-0242ac110017" in namespace "e2e-tests-downward-api-57sst" to be "success or failure" May 2 12:20:02.227: INFO: Pod "downwardapi-volume-3f74827b-8c6f-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 12.801575ms May 2 12:20:04.353: INFO: Pod "downwardapi-volume-3f74827b-8c6f-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138782431s May 2 12:20:06.357: INFO: Pod "downwardapi-volume-3f74827b-8c6f-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.143122238s STEP: Saw pod success May 2 12:20:06.357: INFO: Pod "downwardapi-volume-3f74827b-8c6f-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:20:06.360: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-3f74827b-8c6f-11ea-8045-0242ac110017 container client-container: STEP: delete the pod May 2 12:20:06.473: INFO: Waiting for pod downwardapi-volume-3f74827b-8c6f-11ea-8045-0242ac110017 to disappear May 2 12:20:06.493: INFO: Pod downwardapi-volume-3f74827b-8c6f-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:20:06.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-57sst" for this suite. 
May 2 12:20:12.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:20:12.520: INFO: namespace: e2e-tests-downward-api-57sst, resource: bindings, ignored listing per whitelist May 2 12:20:12.595: INFO: namespace e2e-tests-downward-api-57sst deletion completed in 6.099400144s • [SLOW TEST:10.469 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:20:12.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info May 2 12:20:12.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 2 12:20:15.115: INFO: stderr: "" May 2 12:20:15.115: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32768\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:20:15.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8x42h" for this suite. May 2 12:20:21.150: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:20:21.172: INFO: namespace: e2e-tests-kubectl-8x42h, resource: bindings, ignored listing per whitelist May 2 12:20:21.231: INFO: namespace e2e-tests-kubectl-8x42h deletion completed in 6.112245854s • [SLOW TEST:8.635 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:20:21.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a 
default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4ad6c6f5-8c6f-11ea-8045-0242ac110017 STEP: Creating a pod to test consume secrets May 2 12:20:21.419: INFO: Waiting up to 5m0s for pod "pod-secrets-4ae568fe-8c6f-11ea-8045-0242ac110017" in namespace "e2e-tests-secrets-bvbjs" to be "success or failure" May 2 12:20:21.422: INFO: Pod "pod-secrets-4ae568fe-8c6f-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.091968ms May 2 12:20:23.427: INFO: Pod "pod-secrets-4ae568fe-8c6f-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007447282s May 2 12:20:25.431: INFO: Pod "pod-secrets-4ae568fe-8c6f-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011908293s STEP: Saw pod success May 2 12:20:25.431: INFO: Pod "pod-secrets-4ae568fe-8c6f-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:20:25.434: INFO: Trying to get logs from node hunter-worker pod pod-secrets-4ae568fe-8c6f-11ea-8045-0242ac110017 container secret-volume-test: STEP: delete the pod May 2 12:20:25.472: INFO: Waiting for pod pod-secrets-4ae568fe-8c6f-11ea-8045-0242ac110017 to disappear May 2 12:20:25.501: INFO: Pod pod-secrets-4ae568fe-8c6f-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:20:25.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-bvbjs" for this suite. 
May 2 12:20:31.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:20:31.546: INFO: namespace: e2e-tests-secrets-bvbjs, resource: bindings, ignored listing per whitelist May 2 12:20:31.606: INFO: namespace e2e-tests-secrets-bvbjs deletion completed in 6.100374476s STEP: Destroying namespace "e2e-tests-secret-namespace-nzjkz" for this suite. May 2 12:20:37.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:20:37.772: INFO: namespace: e2e-tests-secret-namespace-nzjkz, resource: bindings, ignored listing per whitelist May 2 12:20:37.779: INFO: namespace e2e-tests-secret-namespace-nzjkz deletion completed in 6.17290261s • [SLOW TEST:16.548 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:20:37.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 May 2 12:20:37.894: INFO: 
Waiting up to 1m0s for all (but 0) nodes to be ready May 2 12:20:37.902: INFO: Waiting for terminating namespaces to be deleted... May 2 12:20:37.904: INFO: Logging pods the kubelet thinks is on node hunter-worker before test May 2 12:20:37.924: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded) May 2 12:20:37.924: INFO: Container kube-proxy ready: true, restart count 0 May 2 12:20:37.924: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 2 12:20:37.924: INFO: Container kindnet-cni ready: true, restart count 0 May 2 12:20:37.924: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 2 12:20:37.924: INFO: Container coredns ready: true, restart count 0 May 2 12:20:37.924: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test May 2 12:20:37.930: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 2 12:20:37.930: INFO: Container kindnet-cni ready: true, restart count 0 May 2 12:20:37.930: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded) May 2 12:20:37.931: INFO: Container coredns ready: true, restart count 0 May 2 12:20:37.931: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded) May 2 12:20:37.931: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to schedule Pod with nonempty NodeSelector. 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160b356d7d2aaea7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:20:38.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-drqf8" for this suite. May 2 12:20:44.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:20:45.003: INFO: namespace: e2e-tests-sched-pred-drqf8, resource: bindings, ignored listing per whitelist May 2 12:20:45.042: INFO: namespace e2e-tests-sched-pred-drqf8 deletion completed in 6.088265225s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:7.264 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:20:45.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:20:45.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zs7p5" for this suite.
May 2 12:21:07.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:21:07.270: INFO: namespace: e2e-tests-pods-zs7p5, resource: bindings, ignored listing per whitelist
May 2 12:21:07.327: INFO: namespace e2e-tests-pods-zs7p5 deletion completed in 22.147565618s
• [SLOW TEST:22.285 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:21:07.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
May 2 12:21:07.444: INFO: Pod name pod-release: Found 0 pods out of 1
May 2 12:21:12.450: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:21:13.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-mx87d" for this suite.
May 2 12:21:19.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:21:19.552: INFO: namespace: e2e-tests-replication-controller-mx87d, resource: bindings, ignored listing per whitelist
May 2 12:21:19.617: INFO: namespace e2e-tests-replication-controller-mx87d deletion completed in 6.14106942s
• [SLOW TEST:12.289 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:21:19.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 2 12:21:39.862: INFO: Container started at 2020-05-02 12:21:22 +0000 UTC, pod became ready at 2020-05-02 12:21:39 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:21:39.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-7lhq6" for this suite.
May 2 12:22:01.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:22:01.955: INFO: namespace: e2e-tests-container-probe-7lhq6, resource: bindings, ignored listing per whitelist
May 2 12:22:01.963: INFO: namespace e2e-tests-container-probe-7lhq6 deletion completed in 22.09699936s
• [SLOW TEST:42.346 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:22:01.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
May 2 12:22:02.121: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"86ea07ae-8c6f-11ea-99e8-0242ac110002", Controller:(*bool)(0xc0029b41f2), BlockOwnerDeletion:(*bool)(0xc0029b41f3)}}
May 2 12:22:02.157: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"86e53b63-8c6f-11ea-99e8-0242ac110002", Controller:(*bool)(0xc002846db2), BlockOwnerDeletion:(*bool)(0xc002846db3)}}
May 2 12:22:02.210: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"86e5cadf-8c6f-11ea-99e8-0242ac110002", Controller:(*bool)(0xc002a51c72), BlockOwnerDeletion:(*bool)(0xc002a51c73)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:22:07.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-zgqt8" for this suite.
May 2 12:22:13.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:22:13.331: INFO: namespace: e2e-tests-gc-zgqt8, resource: bindings, ignored listing per whitelist
May 2 12:22:13.358: INFO: namespace e2e-tests-gc-zgqt8 deletion completed in 6.104022383s
• [SLOW TEST:11.395 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:22:13.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-8daf15e6-8c6f-11ea-8045-0242ac110017
STEP: Creating a pod to test consume secrets
May 2 12:22:13.481: INFO: Waiting up to 5m0s for pod "pod-secrets-8db21422-8c6f-11ea-8045-0242ac110017" in namespace "e2e-tests-secrets-lqxt8" to be "success or failure"
May 2 12:22:13.485: INFO: Pod "pod-secrets-8db21422-8c6f-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.850044ms
May 2 12:22:15.489: INFO: Pod "pod-secrets-8db21422-8c6f-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007900086s
May 2 12:22:17.492: INFO: Pod "pod-secrets-8db21422-8c6f-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011221362s
STEP: Saw pod success
May 2 12:22:17.492: INFO: Pod "pod-secrets-8db21422-8c6f-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 12:22:17.495: INFO: Trying to get logs from node hunter-worker pod pod-secrets-8db21422-8c6f-11ea-8045-0242ac110017 container secret-volume-test:
STEP: delete the pod
May 2 12:22:17.674: INFO: Waiting for pod pod-secrets-8db21422-8c6f-11ea-8045-0242ac110017 to disappear
May 2 12:22:17.710: INFO: Pod pod-secrets-8db21422-8c6f-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:22:17.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lqxt8" for this suite.
May 2 12:22:24.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:22:24.227: INFO: namespace: e2e-tests-secrets-lqxt8, resource: bindings, ignored listing per whitelist
May 2 12:22:24.275: INFO: namespace e2e-tests-secrets-lqxt8 deletion completed in 6.561904398s
• [SLOW TEST:10.916 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:22:24.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 2 12:22:32.486: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:22:32.534: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:22:34.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:22:34.538: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:22:36.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:22:36.538: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:22:38.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:22:38.539: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:22:40.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:22:40.538: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:22:42.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:22:42.539: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:22:44.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:22:44.538: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:22:46.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:22:46.539: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:22:48.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:22:48.539: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:22:50.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:22:50.539: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:22:52.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:22:52.540: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:22:54.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:22:54.546: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:22:56.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:22:56.539: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:22:58.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:22:58.539: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:23:00.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:23:00.538: INFO: Pod pod-with-poststart-exec-hook still exists
May 2 12:23:02.534: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 2 12:23:02.538: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:23:02.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-fm5hr" for this suite.
May 2 12:23:16.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:23:16.583: INFO: namespace: e2e-tests-container-lifecycle-hook-fm5hr, resource: bindings, ignored listing per whitelist
May 2 12:23:16.631: INFO: namespace e2e-tests-container-lifecycle-hook-fm5hr deletion completed in 14.089222909s
• [SLOW TEST:52.356 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:23:16.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
May 2 12:23:20.853: INFO: error from create uninitialized namespace:
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:23:44.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-wdkm7" for this suite.
May 2 12:23:50.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:23:51.038: INFO: namespace: e2e-tests-namespaces-wdkm7, resource: bindings, ignored listing per whitelist
May 2 12:23:51.048: INFO: namespace e2e-tests-namespaces-wdkm7 deletion completed in 6.113811956s
STEP: Destroying namespace "e2e-tests-nsdeletetest-6sc6t" for this suite.
May 2 12:23:51.050: INFO: Namespace e2e-tests-nsdeletetest-6sc6t was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-twqq9" for this suite.
May 2 12:23:57.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:23:57.108: INFO: namespace: e2e-tests-nsdeletetest-twqq9, resource: bindings, ignored listing per whitelist
May 2 12:23:57.158: INFO: namespace e2e-tests-nsdeletetest-twqq9 deletion completed in 6.107639344s
• [SLOW TEST:40.527 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:23:57.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
May 2 12:23:57.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-srj5j'
May 2 12:23:57.393: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 2 12:23:57.393: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
May 2 12:24:01.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-srj5j'
May 2 12:24:02.301: INFO: stderr: ""
May 2 12:24:02.301: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:24:02.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-srj5j" for this suite.
May 2 12:24:08.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:24:08.408: INFO: namespace: e2e-tests-kubectl-srj5j, resource: bindings, ignored listing per whitelist
May 2 12:24:08.440: INFO: namespace e2e-tests-kubectl-srj5j deletion completed in 6.135062083s
• [SLOW TEST:11.282 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:24:08.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
May 2 12:24:08.561: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 2 12:24:08.595: INFO: Waiting for terminating namespaces to be deleted...
May 2 12:24:08.598: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
May 2 12:24:08.604: INFO: kube-proxy-szbng from kube-system started at 2020-03-15 18:23:11 +0000 UTC (1 container statuses recorded)
May 2 12:24:08.604: INFO: Container kube-proxy ready: true, restart count 0
May 2 12:24:08.604: INFO: kindnet-54h7m from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 2 12:24:08.604: INFO: Container kindnet-cni ready: true, restart count 0
May 2 12:24:08.604: INFO: coredns-54ff9cd656-4h7lb from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 2 12:24:08.604: INFO: Container coredns ready: true, restart count 0
May 2 12:24:08.604: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
May 2 12:24:08.609: INFO: coredns-54ff9cd656-8vrkk from kube-system started at 2020-03-15 18:23:32 +0000 UTC (1 container statuses recorded)
May 2 12:24:08.609: INFO: Container coredns ready: true, restart count 0
May 2 12:24:08.609: INFO: kindnet-mtqrs from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 2 12:24:08.609: INFO: Container kindnet-cni ready: true, restart count 0
May 2 12:24:08.609: INFO: kube-proxy-s52ll from kube-system started at 2020-03-15 18:23:12 +0000 UTC (1 container statuses recorded)
May 2 12:24:08.609: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
May 2 12:24:08.683: INFO: Pod coredns-54ff9cd656-4h7lb requesting resource cpu=100m on Node hunter-worker
May 2 12:24:08.683: INFO: Pod coredns-54ff9cd656-8vrkk requesting resource cpu=100m on Node hunter-worker2
May 2 12:24:08.683: INFO: Pod kindnet-54h7m requesting resource cpu=100m on Node hunter-worker
May 2 12:24:08.683: INFO: Pod kindnet-mtqrs requesting resource cpu=100m on Node hunter-worker2
May 2 12:24:08.683: INFO: Pod kube-proxy-s52ll requesting resource cpu=0m on Node hunter-worker2
May 2 12:24:08.683: INFO: Pod kube-proxy-szbng requesting resource cpu=0m on Node hunter-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-d25dab83-8c6f-11ea-8045-0242ac110017.160b359e9055533f], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-zq97b/filler-pod-d25dab83-8c6f-11ea-8045-0242ac110017 to hunter-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d25dab83-8c6f-11ea-8045-0242ac110017.160b359ede03d113], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d25dab83-8c6f-11ea-8045-0242ac110017.160b359f2d9e3f31], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d25dab83-8c6f-11ea-8045-0242ac110017.160b359f452d74d7], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d25e67b2-8c6f-11ea-8045-0242ac110017.160b359e91bcae9f], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-zq97b/filler-pod-d25e67b2-8c6f-11ea-8045-0242ac110017 to hunter-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d25e67b2-8c6f-11ea-8045-0242ac110017.160b359f36e17905], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d25e67b2-8c6f-11ea-8045-0242ac110017.160b359f5e441605], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d25e67b2-8c6f-11ea-8045-0242ac110017.160b359f6bd22f8c], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Warning], Name = [additional-pod.160b359f810f3c14], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:24:13.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-zq97b" for this suite.
May 2 12:24:19.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:24:20.002: INFO: namespace: e2e-tests-sched-pred-zq97b, resource: bindings, ignored listing per whitelist
May 2 12:24:20.048: INFO: namespace e2e-tests-sched-pred-zq97b deletion completed in 6.157969259s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:11.607 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:24:20.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
May 2 12:24:20.199: INFO: Waiting up to 5m0s for pod "pod-d934a071-8c6f-11ea-8045-0242ac110017" in namespace "e2e-tests-emptydir-msdz2" to be "success or failure"
May 2 12:24:20.202: INFO: Pod "pod-d934a071-8c6f-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.782048ms
May 2 12:24:22.206: INFO: Pod "pod-d934a071-8c6f-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006690436s
May 2 12:24:24.211: INFO: Pod "pod-d934a071-8c6f-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011185282s
STEP: Saw pod success
May 2 12:24:24.211: INFO: Pod "pod-d934a071-8c6f-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 12:24:24.214: INFO: Trying to get logs from node hunter-worker2 pod pod-d934a071-8c6f-11ea-8045-0242ac110017 container test-container:
STEP: delete the pod
May 2 12:24:24.268: INFO: Waiting for pod pod-d934a071-8c6f-11ea-8045-0242ac110017 to disappear
May 2 12:24:24.272: INFO: Pod pod-d934a071-8c6f-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:24:24.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-msdz2" for this suite.
May 2 12:24:30.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:24:30.320: INFO: namespace: e2e-tests-emptydir-msdz2, resource: bindings, ignored listing per whitelist
May 2 12:24:30.401: INFO: namespace e2e-tests-emptydir-msdz2 deletion completed in 6.125114316s
• [SLOW TEST:10.353 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
volume on default medium should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:24:30.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0502 12:25:01.628900 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 2 12:25:01.628: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:25:01.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-8tq6w" for this suite.
May 2 12:25:09.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:25:09.751: INFO: namespace: e2e-tests-gc-8tq6w, resource: bindings, ignored listing per whitelist
May 2 12:25:09.808: INFO: namespace e2e-tests-gc-8tq6w deletion completed in 8.175824477s
• [SLOW TEST:39.407 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:25:09.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
May 2 12:25:09.917: INFO: Waiting up to 5m0s for pod "var-expansion-f6da7062-8c6f-11ea-8045-0242ac110017" in namespace "e2e-tests-var-expansion-4nrxb" to be "success or failure"
May 2 12:25:09.926: INFO: Pod "var-expansion-f6da7062-8c6f-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 9.147521ms
May 2 12:25:11.930: INFO: Pod "var-expansion-f6da7062-8c6f-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012691532s
May 2 12:25:13.934: INFO: Pod "var-expansion-f6da7062-8c6f-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017159692s
STEP: Saw pod success
May 2 12:25:13.934: INFO: Pod "var-expansion-f6da7062-8c6f-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 12:25:13.937: INFO: Trying to get logs from node hunter-worker pod var-expansion-f6da7062-8c6f-11ea-8045-0242ac110017 container dapi-container:
STEP: delete the pod
May 2 12:25:13.982: INFO: Waiting for pod var-expansion-f6da7062-8c6f-11ea-8045-0242ac110017 to disappear
May 2 12:25:14.005: INFO: Pod var-expansion-f6da7062-8c6f-11ea-8045-0242ac110017 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:25:14.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-4nrxb" for this suite.
May 2 12:25:20.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:25:20.030: INFO: namespace: e2e-tests-var-expansion-4nrxb, resource: bindings, ignored listing per whitelist May 2 12:25:20.090: INFO: namespace e2e-tests-var-expansion-4nrxb deletion completed in 6.082256916s • [SLOW TEST:10.282 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:25:20.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server May 2 12:25:20.227: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:25:20.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "e2e-tests-kubectl-458pg" for this suite. May 2 12:25:26.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:25:26.395: INFO: namespace: e2e-tests-kubectl-458pg, resource: bindings, ignored listing per whitelist May 2 12:25:26.433: INFO: namespace e2e-tests-kubectl-458pg deletion completed in 6.111939012s • [SLOW TEST:6.342 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:25:26.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine May 2 12:25:26.552: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-ndwvj' May 2 12:25:26.671: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 2 12:25:26.671: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 2 12:25:26.676: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 2 12:25:26.688: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 2 12:25:26.696: INFO: scanned /root for discovery docs: May 2 12:25:26.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-ndwvj' May 2 12:25:42.503: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 2 12:25:42.503: INFO: stdout: "Created e2e-test-nginx-rc-5f0401b882e9f6d923366d7759e8ed61\nScaling up e2e-test-nginx-rc-5f0401b882e9f6d923366d7759e8ed61 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-5f0401b882e9f6d923366d7759e8ed61 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-5f0401b882e9f6d923366d7759e8ed61 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 2 12:25:42.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ndwvj' May 2 12:25:42.617: INFO: stderr: "" May 2 12:25:42.617: INFO: stdout: "e2e-test-nginx-rc-5f0401b882e9f6d923366d7759e8ed61-4jglx " May 2 12:25:42.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5f0401b882e9f6d923366d7759e8ed61-4jglx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ndwvj' May 2 12:25:42.713: INFO: stderr: "" May 2 12:25:42.713: INFO: stdout: "true" May 2 12:25:42.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-5f0401b882e9f6d923366d7759e8ed61-4jglx -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ndwvj' May 2 12:25:42.803: INFO: stderr: "" May 2 12:25:42.803: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 2 12:25:42.803: INFO: e2e-test-nginx-rc-5f0401b882e9f6d923366d7759e8ed61-4jglx is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 May 2 12:25:42.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ndwvj' May 2 12:25:42.903: INFO: stderr: "" May 2 12:25:42.903: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:25:42.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ndwvj" for this suite. 
May 2 12:25:48.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:25:48.945: INFO: namespace: e2e-tests-kubectl-ndwvj, resource: bindings, ignored listing per whitelist May 2 12:25:48.995: INFO: namespace e2e-tests-kubectl-ndwvj deletion completed in 6.089253828s • [SLOW TEST:22.562 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:25:48.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-t6xwc [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-t6xwc STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-t6xwc May 2 12:25:49.119: INFO: Found 0 stateful pods, waiting for 1 May 2 12:25:59.135: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 2 12:25:59.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6xwc ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 12:25:59.398: INFO: stderr: "I0502 12:25:59.270105 3447 log.go:172] (0xc000154790) (0xc0007c5400) Create stream\nI0502 12:25:59.270167 3447 log.go:172] (0xc000154790) (0xc0007c5400) Stream added, broadcasting: 1\nI0502 12:25:59.272748 3447 log.go:172] (0xc000154790) Reply frame received for 1\nI0502 12:25:59.272811 3447 log.go:172] (0xc000154790) (0xc0007c54a0) Create stream\nI0502 12:25:59.272826 3447 log.go:172] (0xc000154790) (0xc0007c54a0) Stream added, broadcasting: 3\nI0502 12:25:59.274070 3447 log.go:172] (0xc000154790) Reply frame received for 3\nI0502 12:25:59.274115 3447 log.go:172] (0xc000154790) (0xc0007c5540) Create stream\nI0502 12:25:59.274133 3447 log.go:172] (0xc000154790) (0xc0007c5540) Stream added, broadcasting: 5\nI0502 12:25:59.275146 3447 log.go:172] (0xc000154790) Reply frame received for 5\nI0502 12:25:59.391141 3447 log.go:172] (0xc000154790) Data frame received for 5\nI0502 12:25:59.391189 3447 log.go:172] (0xc0007c5540) (5) Data frame handling\nI0502 12:25:59.391220 3447 log.go:172] (0xc000154790) Data frame received for 3\nI0502 12:25:59.391235 3447 log.go:172] (0xc0007c54a0) (3) Data frame handling\nI0502 12:25:59.391246 3447 log.go:172] 
(0xc0007c54a0) (3) Data frame sent\nI0502 12:25:59.391259 3447 log.go:172] (0xc000154790) Data frame received for 3\nI0502 12:25:59.391270 3447 log.go:172] (0xc0007c54a0) (3) Data frame handling\nI0502 12:25:59.393455 3447 log.go:172] (0xc000154790) Data frame received for 1\nI0502 12:25:59.393474 3447 log.go:172] (0xc0007c5400) (1) Data frame handling\nI0502 12:25:59.393483 3447 log.go:172] (0xc0007c5400) (1) Data frame sent\nI0502 12:25:59.393493 3447 log.go:172] (0xc000154790) (0xc0007c5400) Stream removed, broadcasting: 1\nI0502 12:25:59.393511 3447 log.go:172] (0xc000154790) Go away received\nI0502 12:25:59.393786 3447 log.go:172] (0xc000154790) (0xc0007c5400) Stream removed, broadcasting: 1\nI0502 12:25:59.393810 3447 log.go:172] (0xc000154790) (0xc0007c54a0) Stream removed, broadcasting: 3\nI0502 12:25:59.393822 3447 log.go:172] (0xc000154790) (0xc0007c5540) Stream removed, broadcasting: 5\n" May 2 12:25:59.398: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 12:25:59.398: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 12:25:59.402: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 2 12:26:09.406: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 2 12:26:09.406: INFO: Waiting for statefulset status.replicas updated to 0 May 2 12:26:09.431: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999386s May 2 12:26:10.435: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.984614907s May 2 12:26:11.440: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.979722387s May 2 12:26:12.444: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.975111021s May 2 12:26:13.450: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.970906242s May 2 12:26:14.454: INFO: Verifying statefulset ss 
doesn't scale past 1 for another 4.965424767s May 2 12:26:15.459: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.960985292s May 2 12:26:16.471: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.955785894s May 2 12:26:17.474: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.944537813s May 2 12:26:18.483: INFO: Verifying statefulset ss doesn't scale past 1 for another 940.683534ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-t6xwc May 2 12:26:19.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6xwc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 12:26:19.704: INFO: stderr: "I0502 12:26:19.623081 3470 log.go:172] (0xc00014c840) (0xc000742640) Create stream\nI0502 12:26:19.623152 3470 log.go:172] (0xc00014c840) (0xc000742640) Stream added, broadcasting: 1\nI0502 12:26:19.626415 3470 log.go:172] (0xc00014c840) Reply frame received for 1\nI0502 12:26:19.626474 3470 log.go:172] (0xc00014c840) (0xc0007426e0) Create stream\nI0502 12:26:19.626492 3470 log.go:172] (0xc00014c840) (0xc0007426e0) Stream added, broadcasting: 3\nI0502 12:26:19.627673 3470 log.go:172] (0xc00014c840) Reply frame received for 3\nI0502 12:26:19.627742 3470 log.go:172] (0xc00014c840) (0xc000672c80) Create stream\nI0502 12:26:19.627761 3470 log.go:172] (0xc00014c840) (0xc000672c80) Stream added, broadcasting: 5\nI0502 12:26:19.628563 3470 log.go:172] (0xc00014c840) Reply frame received for 5\nI0502 12:26:19.688904 3470 log.go:172] (0xc00014c840) Data frame received for 5\nI0502 12:26:19.688966 3470 log.go:172] (0xc000672c80) (5) Data frame handling\nI0502 12:26:19.689006 3470 log.go:172] (0xc00014c840) Data frame received for 3\nI0502 12:26:19.689024 3470 log.go:172] (0xc0007426e0) (3) Data frame handling\nI0502 12:26:19.689067 3470 log.go:172] (0xc0007426e0) (3) Data frame 
sent\nI0502 12:26:19.689107 3470 log.go:172] (0xc00014c840) Data frame received for 3\nI0502 12:26:19.689348 3470 log.go:172] (0xc0007426e0) (3) Data frame handling\nI0502 12:26:19.691051 3470 log.go:172] (0xc00014c840) Data frame received for 1\nI0502 12:26:19.691139 3470 log.go:172] (0xc000742640) (1) Data frame handling\nI0502 12:26:19.691176 3470 log.go:172] (0xc000742640) (1) Data frame sent\nI0502 12:26:19.691217 3470 log.go:172] (0xc00014c840) (0xc000742640) Stream removed, broadcasting: 1\nI0502 12:26:19.691378 3470 log.go:172] (0xc00014c840) Go away received\nI0502 12:26:19.691450 3470 log.go:172] (0xc00014c840) (0xc000742640) Stream removed, broadcasting: 1\nI0502 12:26:19.691479 3470 log.go:172] (0xc00014c840) (0xc0007426e0) Stream removed, broadcasting: 3\nI0502 12:26:19.691491 3470 log.go:172] (0xc00014c840) (0xc000672c80) Stream removed, broadcasting: 5\n" May 2 12:26:19.704: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 12:26:19.704: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 12:26:19.710: INFO: Found 1 stateful pods, waiting for 3 May 2 12:26:29.715: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 2 12:26:29.715: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 2 12:26:29.715: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 2 12:26:29.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6xwc ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 12:26:29.944: INFO: stderr: "I0502 12:26:29.858855 3492 log.go:172] (0xc000154840) (0xc00065f360) Create stream\nI0502 12:26:29.858916 3492 
log.go:172] (0xc000154840) (0xc00065f360) Stream added, broadcasting: 1\nI0502 12:26:29.860980 3492 log.go:172] (0xc000154840) Reply frame received for 1\nI0502 12:26:29.861017 3492 log.go:172] (0xc000154840) (0xc00065f400) Create stream\nI0502 12:26:29.861025 3492 log.go:172] (0xc000154840) (0xc00065f400) Stream added, broadcasting: 3\nI0502 12:26:29.862031 3492 log.go:172] (0xc000154840) Reply frame received for 3\nI0502 12:26:29.862099 3492 log.go:172] (0xc000154840) (0xc000338000) Create stream\nI0502 12:26:29.862123 3492 log.go:172] (0xc000154840) (0xc000338000) Stream added, broadcasting: 5\nI0502 12:26:29.862934 3492 log.go:172] (0xc000154840) Reply frame received for 5\nI0502 12:26:29.936340 3492 log.go:172] (0xc000154840) Data frame received for 3\nI0502 12:26:29.936399 3492 log.go:172] (0xc00065f400) (3) Data frame handling\nI0502 12:26:29.936427 3492 log.go:172] (0xc00065f400) (3) Data frame sent\nI0502 12:26:29.936458 3492 log.go:172] (0xc000154840) Data frame received for 3\nI0502 12:26:29.936472 3492 log.go:172] (0xc00065f400) (3) Data frame handling\nI0502 12:26:29.936534 3492 log.go:172] (0xc000154840) Data frame received for 5\nI0502 12:26:29.936605 3492 log.go:172] (0xc000338000) (5) Data frame handling\nI0502 12:26:29.939049 3492 log.go:172] (0xc000154840) Data frame received for 1\nI0502 12:26:29.939084 3492 log.go:172] (0xc00065f360) (1) Data frame handling\nI0502 12:26:29.939101 3492 log.go:172] (0xc00065f360) (1) Data frame sent\nI0502 12:26:29.939123 3492 log.go:172] (0xc000154840) (0xc00065f360) Stream removed, broadcasting: 1\nI0502 12:26:29.939157 3492 log.go:172] (0xc000154840) Go away received\nI0502 12:26:29.939416 3492 log.go:172] (0xc000154840) (0xc00065f360) Stream removed, broadcasting: 1\nI0502 12:26:29.939445 3492 log.go:172] (0xc000154840) (0xc00065f400) Stream removed, broadcasting: 3\nI0502 12:26:29.939459 3492 log.go:172] (0xc000154840) (0xc000338000) Stream removed, broadcasting: 5\n" May 2 12:26:29.944: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 12:26:29.944: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 12:26:29.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6xwc ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 12:26:30.172: INFO: stderr: "I0502 12:26:30.068039 3514 log.go:172] (0xc0006d6370) (0xc000655360) Create stream\nI0502 12:26:30.068089 3514 log.go:172] (0xc0006d6370) (0xc000655360) Stream added, broadcasting: 1\nI0502 12:26:30.070778 3514 log.go:172] (0xc0006d6370) Reply frame received for 1\nI0502 12:26:30.070836 3514 log.go:172] (0xc0006d6370) (0xc000358000) Create stream\nI0502 12:26:30.070861 3514 log.go:172] (0xc0006d6370) (0xc000358000) Stream added, broadcasting: 3\nI0502 12:26:30.071846 3514 log.go:172] (0xc0006d6370) Reply frame received for 3\nI0502 12:26:30.071870 3514 log.go:172] (0xc0006d6370) (0xc000655400) Create stream\nI0502 12:26:30.071879 3514 log.go:172] (0xc0006d6370) (0xc000655400) Stream added, broadcasting: 5\nI0502 12:26:30.072754 3514 log.go:172] (0xc0006d6370) Reply frame received for 5\nI0502 12:26:30.164067 3514 log.go:172] (0xc0006d6370) Data frame received for 3\nI0502 12:26:30.164099 3514 log.go:172] (0xc000358000) (3) Data frame handling\nI0502 12:26:30.164120 3514 log.go:172] (0xc000358000) (3) Data frame sent\nI0502 12:26:30.164132 3514 log.go:172] (0xc0006d6370) Data frame received for 3\nI0502 12:26:30.164142 3514 log.go:172] (0xc000358000) (3) Data frame handling\nI0502 12:26:30.164525 3514 log.go:172] (0xc0006d6370) Data frame received for 5\nI0502 12:26:30.164547 3514 log.go:172] (0xc000655400) (5) Data frame handling\nI0502 12:26:30.166691 3514 log.go:172] (0xc0006d6370) Data frame received for 1\nI0502 12:26:30.166737 3514 log.go:172] (0xc000655360) (1) Data frame handling\nI0502 12:26:30.166762 3514 
log.go:172] (0xc000655360) (1) Data frame sent\nI0502 12:26:30.166790 3514 log.go:172] (0xc0006d6370) (0xc000655360) Stream removed, broadcasting: 1\nI0502 12:26:30.166824 3514 log.go:172] (0xc0006d6370) Go away received\nI0502 12:26:30.167025 3514 log.go:172] (0xc0006d6370) (0xc000655360) Stream removed, broadcasting: 1\nI0502 12:26:30.167069 3514 log.go:172] (0xc0006d6370) (0xc000358000) Stream removed, broadcasting: 3\nI0502 12:26:30.167089 3514 log.go:172] (0xc0006d6370) (0xc000655400) Stream removed, broadcasting: 5\n" May 2 12:26:30.172: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 12:26:30.172: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 12:26:30.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6xwc ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 2 12:26:30.423: INFO: stderr: "I0502 12:26:30.308723 3537 log.go:172] (0xc00011c580) (0xc0007125a0) Create stream\nI0502 12:26:30.308797 3537 log.go:172] (0xc00011c580) (0xc0007125a0) Stream added, broadcasting: 1\nI0502 12:26:30.311672 3537 log.go:172] (0xc00011c580) Reply frame received for 1\nI0502 12:26:30.311712 3537 log.go:172] (0xc00011c580) (0xc0005d0dc0) Create stream\nI0502 12:26:30.311722 3537 log.go:172] (0xc00011c580) (0xc0005d0dc0) Stream added, broadcasting: 3\nI0502 12:26:30.312522 3537 log.go:172] (0xc00011c580) Reply frame received for 3\nI0502 12:26:30.312553 3537 log.go:172] (0xc00011c580) (0xc000712640) Create stream\nI0502 12:26:30.312561 3537 log.go:172] (0xc00011c580) (0xc000712640) Stream added, broadcasting: 5\nI0502 12:26:30.313463 3537 log.go:172] (0xc00011c580) Reply frame received for 5\nI0502 12:26:30.416328 3537 log.go:172] (0xc00011c580) Data frame received for 3\nI0502 12:26:30.416377 3537 log.go:172] (0xc0005d0dc0) (3) Data frame handling\nI0502 
12:26:30.416404 3537 log.go:172] (0xc0005d0dc0) (3) Data frame sent\nI0502 12:26:30.416420 3537 log.go:172] (0xc00011c580) Data frame received for 3\nI0502 12:26:30.416435 3537 log.go:172] (0xc0005d0dc0) (3) Data frame handling\nI0502 12:26:30.417002 3537 log.go:172] (0xc00011c580) Data frame received for 5\nI0502 12:26:30.417030 3537 log.go:172] (0xc000712640) (5) Data frame handling\nI0502 12:26:30.418876 3537 log.go:172] (0xc00011c580) Data frame received for 1\nI0502 12:26:30.418907 3537 log.go:172] (0xc0007125a0) (1) Data frame handling\nI0502 12:26:30.418940 3537 log.go:172] (0xc0007125a0) (1) Data frame sent\nI0502 12:26:30.418963 3537 log.go:172] (0xc00011c580) (0xc0007125a0) Stream removed, broadcasting: 1\nI0502 12:26:30.419041 3537 log.go:172] (0xc00011c580) Go away received\nI0502 12:26:30.419205 3537 log.go:172] (0xc00011c580) (0xc0007125a0) Stream removed, broadcasting: 1\nI0502 12:26:30.419235 3537 log.go:172] (0xc00011c580) (0xc0005d0dc0) Stream removed, broadcasting: 3\nI0502 12:26:30.419248 3537 log.go:172] (0xc00011c580) (0xc000712640) Stream removed, broadcasting: 5\n" May 2 12:26:30.424: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 2 12:26:30.424: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 2 12:26:30.424: INFO: Waiting for statefulset status.replicas updated to 0 May 2 12:26:30.427: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 2 12:26:40.436: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 2 12:26:40.436: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 2 12:26:40.436: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 2 12:26:40.450: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999689s May 2 12:26:41.456: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 8.991655552s May 2 12:26:42.462: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985744204s May 2 12:26:43.467: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.979865328s May 2 12:26:44.472: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975022026s May 2 12:26:45.478: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970071478s May 2 12:26:46.483: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.964152261s May 2 12:26:47.488: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.959012136s May 2 12:26:48.492: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.953875816s May 2 12:26:49.498: INFO: Verifying statefulset ss doesn't scale past 3 for another 949.560935ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-t6xwc May 2 12:26:50.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6xwc ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 12:26:50.755: INFO: stderr: "I0502 12:26:50.664913 3560 log.go:172] (0xc000138160) (0xc00068f360) Create stream\nI0502 12:26:50.664989 3560 log.go:172] (0xc000138160) (0xc00068f360) Stream added, broadcasting: 1\nI0502 12:26:50.667901 3560 log.go:172] (0xc000138160) Reply frame received for 1\nI0502 12:26:50.667950 3560 log.go:172] (0xc000138160) (0xc000502000) Create stream\nI0502 12:26:50.667968 3560 log.go:172] (0xc000138160) (0xc000502000) Stream added, broadcasting: 3\nI0502 12:26:50.669082 3560 log.go:172] (0xc000138160) Reply frame received for 3\nI0502 12:26:50.669274 3560 log.go:172] (0xc000138160) (0xc000116000) Create stream\nI0502 12:26:50.669292 3560 log.go:172] (0xc000138160) (0xc000116000) Stream added, broadcasting: 5\nI0502 12:26:50.670518 3560 log.go:172] (0xc000138160) Reply frame received 
for 5\nI0502 12:26:50.748176 3560 log.go:172] (0xc000138160) Data frame received for 3\nI0502 12:26:50.748234 3560 log.go:172] (0xc000138160) Data frame received for 5\nI0502 12:26:50.748264 3560 log.go:172] (0xc000116000) (5) Data frame handling\nI0502 12:26:50.748291 3560 log.go:172] (0xc000502000) (3) Data frame handling\nI0502 12:26:50.748308 3560 log.go:172] (0xc000502000) (3) Data frame sent\nI0502 12:26:50.748327 3560 log.go:172] (0xc000138160) Data frame received for 3\nI0502 12:26:50.748358 3560 log.go:172] (0xc000502000) (3) Data frame handling\nI0502 12:26:50.750186 3560 log.go:172] (0xc000138160) Data frame received for 1\nI0502 12:26:50.750217 3560 log.go:172] (0xc00068f360) (1) Data frame handling\nI0502 12:26:50.750237 3560 log.go:172] (0xc00068f360) (1) Data frame sent\nI0502 12:26:50.750252 3560 log.go:172] (0xc000138160) (0xc00068f360) Stream removed, broadcasting: 1\nI0502 12:26:50.750275 3560 log.go:172] (0xc000138160) Go away received\nI0502 12:26:50.750597 3560 log.go:172] (0xc000138160) (0xc00068f360) Stream removed, broadcasting: 1\nI0502 12:26:50.750635 3560 log.go:172] (0xc000138160) (0xc000502000) Stream removed, broadcasting: 3\nI0502 12:26:50.750648 3560 log.go:172] (0xc000138160) (0xc000116000) Stream removed, broadcasting: 5\n" May 2 12:26:50.755: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 12:26:50.755: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 12:26:50.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6xwc ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 12:26:50.956: INFO: stderr: "I0502 12:26:50.878822 3583 log.go:172] (0xc0008382c0) (0xc00072a640) Create stream\nI0502 12:26:50.878872 3583 log.go:172] (0xc0008382c0) (0xc00072a640) Stream added, broadcasting: 1\nI0502 12:26:50.881633 3583 log.go:172] 
(0xc0008382c0) Reply frame received for 1\nI0502 12:26:50.881672 3583 log.go:172] (0xc0008382c0) (0xc000674fa0) Create stream\nI0502 12:26:50.881684 3583 log.go:172] (0xc0008382c0) (0xc000674fa0) Stream added, broadcasting: 3\nI0502 12:26:50.882741 3583 log.go:172] (0xc0008382c0) Reply frame received for 3\nI0502 12:26:50.882816 3583 log.go:172] (0xc0008382c0) (0xc00001e000) Create stream\nI0502 12:26:50.882835 3583 log.go:172] (0xc0008382c0) (0xc00001e000) Stream added, broadcasting: 5\nI0502 12:26:50.883847 3583 log.go:172] (0xc0008382c0) Reply frame received for 5\nI0502 12:26:50.949952 3583 log.go:172] (0xc0008382c0) Data frame received for 5\nI0502 12:26:50.950015 3583 log.go:172] (0xc0008382c0) Data frame received for 3\nI0502 12:26:50.950052 3583 log.go:172] (0xc000674fa0) (3) Data frame handling\nI0502 12:26:50.950075 3583 log.go:172] (0xc000674fa0) (3) Data frame sent\nI0502 12:26:50.950093 3583 log.go:172] (0xc0008382c0) Data frame received for 3\nI0502 12:26:50.950113 3583 log.go:172] (0xc000674fa0) (3) Data frame handling\nI0502 12:26:50.950132 3583 log.go:172] (0xc00001e000) (5) Data frame handling\nI0502 12:26:50.952034 3583 log.go:172] (0xc0008382c0) Data frame received for 1\nI0502 12:26:50.952074 3583 log.go:172] (0xc00072a640) (1) Data frame handling\nI0502 12:26:50.952106 3583 log.go:172] (0xc00072a640) (1) Data frame sent\nI0502 12:26:50.952216 3583 log.go:172] (0xc0008382c0) (0xc00072a640) Stream removed, broadcasting: 1\nI0502 12:26:50.952365 3583 log.go:172] (0xc0008382c0) Go away received\nI0502 12:26:50.952425 3583 log.go:172] (0xc0008382c0) (0xc00072a640) Stream removed, broadcasting: 1\nI0502 12:26:50.952447 3583 log.go:172] (0xc0008382c0) (0xc000674fa0) Stream removed, broadcasting: 3\nI0502 12:26:50.952462 3583 log.go:172] (0xc0008382c0) (0xc00001e000) Stream removed, broadcasting: 5\n" May 2 12:26:50.956: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 12:26:50.956: INFO: stdout of mv -v /tmp/index.html 
/usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 12:26:50.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-t6xwc ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 2 12:26:51.177: INFO: stderr: "I0502 12:26:51.112585 3605 log.go:172] (0xc000138790) (0xc000724640) Create stream\nI0502 12:26:51.112650 3605 log.go:172] (0xc000138790) (0xc000724640) Stream added, broadcasting: 1\nI0502 12:26:51.115443 3605 log.go:172] (0xc000138790) Reply frame received for 1\nI0502 12:26:51.115482 3605 log.go:172] (0xc000138790) (0xc000624e60) Create stream\nI0502 12:26:51.115493 3605 log.go:172] (0xc000138790) (0xc000624e60) Stream added, broadcasting: 3\nI0502 12:26:51.116443 3605 log.go:172] (0xc000138790) Reply frame received for 3\nI0502 12:26:51.116486 3605 log.go:172] (0xc000138790) (0xc000624fa0) Create stream\nI0502 12:26:51.116497 3605 log.go:172] (0xc000138790) (0xc000624fa0) Stream added, broadcasting: 5\nI0502 12:26:51.117645 3605 log.go:172] (0xc000138790) Reply frame received for 5\nI0502 12:26:51.170744 3605 log.go:172] (0xc000138790) Data frame received for 5\nI0502 12:26:51.170764 3605 log.go:172] (0xc000624fa0) (5) Data frame handling\nI0502 12:26:51.170817 3605 log.go:172] (0xc000138790) Data frame received for 3\nI0502 12:26:51.170862 3605 log.go:172] (0xc000624e60) (3) Data frame handling\nI0502 12:26:51.170887 3605 log.go:172] (0xc000624e60) (3) Data frame sent\nI0502 12:26:51.170903 3605 log.go:172] (0xc000138790) Data frame received for 3\nI0502 12:26:51.170914 3605 log.go:172] (0xc000624e60) (3) Data frame handling\nI0502 12:26:51.172309 3605 log.go:172] (0xc000138790) Data frame received for 1\nI0502 12:26:51.172335 3605 log.go:172] (0xc000724640) (1) Data frame handling\nI0502 12:26:51.172356 3605 log.go:172] (0xc000724640) (1) Data frame sent\nI0502 12:26:51.172372 3605 log.go:172] (0xc000138790) (0xc000724640) 
Stream removed, broadcasting: 1\nI0502 12:26:51.172396 3605 log.go:172] (0xc000138790) Go away received\nI0502 12:26:51.172696 3605 log.go:172] (0xc000138790) (0xc000724640) Stream removed, broadcasting: 1\nI0502 12:26:51.172723 3605 log.go:172] (0xc000138790) (0xc000624e60) Stream removed, broadcasting: 3\nI0502 12:26:51.172736 3605 log.go:172] (0xc000138790) (0xc000624fa0) Stream removed, broadcasting: 5\n" May 2 12:26:51.177: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 2 12:26:51.177: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 2 12:26:51.177: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 May 2 12:27:11.195: INFO: Deleting all statefulset in ns e2e-tests-statefulset-t6xwc May 2 12:27:11.198: INFO: Scaling statefulset ss to 0 May 2 12:27:11.207: INFO: Waiting for statefulset status.replicas updated to 0 May 2 12:27:11.217: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:27:11.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-t6xwc" for this suite. 
May 2 12:27:17.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:27:17.343: INFO: namespace: e2e-tests-statefulset-t6xwc, resource: bindings, ignored listing per whitelist May 2 12:27:17.348: INFO: namespace e2e-tests-statefulset-t6xwc deletion completed in 6.113906227s • [SLOW TEST:88.353 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:27:17.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-9vqz6/configmap-test-42df4ae3-8c70-11ea-8045-0242ac110017 STEP: Creating a pod to test consume configMaps May 2 12:27:17.471: INFO: Waiting up to 5m0s for pod "pod-configmaps-42e16ac8-8c70-11ea-8045-0242ac110017" in namespace "e2e-tests-configmap-9vqz6" to be "success or failure" May 2 12:27:17.555: INFO: Pod 
"pod-configmaps-42e16ac8-8c70-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 84.171434ms May 2 12:27:19.615: INFO: Pod "pod-configmaps-42e16ac8-8c70-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144161223s May 2 12:27:21.620: INFO: Pod "pod-configmaps-42e16ac8-8c70-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.148465242s STEP: Saw pod success May 2 12:27:21.620: INFO: Pod "pod-configmaps-42e16ac8-8c70-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:27:21.622: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-42e16ac8-8c70-11ea-8045-0242ac110017 container env-test: STEP: delete the pod May 2 12:27:21.638: INFO: Waiting for pod pod-configmaps-42e16ac8-8c70-11ea-8045-0242ac110017 to disappear May 2 12:27:21.642: INFO: Pod pod-configmaps-42e16ac8-8c70-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:27:21.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-9vqz6" for this suite. 
May 2 12:27:27.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:27:27.687: INFO: namespace: e2e-tests-configmap-9vqz6, resource: bindings, ignored listing per whitelist May 2 12:27:27.732: INFO: namespace e2e-tests-configmap-9vqz6 deletion completed in 6.086983636s • [SLOW TEST:10.385 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:27:27.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token May 2 12:27:28.333: INFO: Waiting up to 5m0s for pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-9smql" in namespace "e2e-tests-svcaccounts-lqldf" to be "success or failure" May 2 12:27:28.336: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-9smql": Phase="Pending", Reason="", readiness=false. Elapsed: 2.707156ms May 2 12:27:30.340: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-9smql": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006973974s May 2 12:27:32.344: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-9smql": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01085159s May 2 12:27:34.348: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-9smql": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014796253s May 2 12:27:36.352: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-9smql": Phase="Running", Reason="", readiness=false. Elapsed: 8.018792253s May 2 12:27:38.356: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-9smql": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.02278636s STEP: Saw pod success May 2 12:27:38.356: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-9smql" satisfied condition "success or failure" May 2 12:27:38.359: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-9smql container token-test: STEP: delete the pod May 2 12:27:38.407: INFO: Waiting for pod pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-9smql to disappear May 2 12:27:38.422: INFO: Pod pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-9smql no longer exists STEP: Creating a pod to test consume service account root CA May 2 12:27:38.425: INFO: Waiting up to 5m0s for pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-vvrvj" in namespace "e2e-tests-svcaccounts-lqldf" to be "success or failure" May 2 12:27:38.428: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-vvrvj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382589ms May 2 12:27:40.431: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-vvrvj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005875077s May 2 12:27:42.436: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-vvrvj": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.010216619s May 2 12:27:44.439: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-vvrvj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013853706s STEP: Saw pod success May 2 12:27:44.439: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-vvrvj" satisfied condition "success or failure" May 2 12:27:44.442: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-vvrvj container root-ca-test: STEP: delete the pod May 2 12:27:44.515: INFO: Waiting for pod pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-vvrvj to disappear May 2 12:27:44.579: INFO: Pod pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-vvrvj no longer exists STEP: Creating a pod to test consume service account namespace May 2 12:27:44.583: INFO: Waiting up to 5m0s for pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-h662w" in namespace "e2e-tests-svcaccounts-lqldf" to be "success or failure" May 2 12:27:44.595: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-h662w": Phase="Pending", Reason="", readiness=false. Elapsed: 11.68774ms May 2 12:27:46.599: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-h662w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015173598s May 2 12:27:48.603: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-h662w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01923385s May 2 12:27:50.606: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-h662w": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.022521208s STEP: Saw pod success May 2 12:27:50.606: INFO: Pod "pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-h662w" satisfied condition "success or failure" May 2 12:27:50.608: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-h662w container namespace-test: STEP: delete the pod May 2 12:27:50.673: INFO: Waiting for pod pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-h662w to disappear May 2 12:27:50.684: INFO: Pod pod-service-account-495cc949-8c70-11ea-8045-0242ac110017-h662w no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:27:50.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-lqldf" for this suite. May 2 12:27:56.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:27:56.770: INFO: namespace: e2e-tests-svcaccounts-lqldf, resource: bindings, ignored listing per whitelist May 2 12:27:56.788: INFO: namespace e2e-tests-svcaccounts-lqldf deletion completed in 6.0997621s • [SLOW TEST:29.055 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 
12:27:56.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod May 2 12:27:56.866: INFO: PodSpec: initContainers in spec.initContainers May 2 12:28:46.959: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-5a5fb226-8c70-11ea-8045-0242ac110017", GenerateName:"", Namespace:"e2e-tests-init-container-qhdlr", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-qhdlr/pods/pod-init-5a5fb226-8c70-11ea-8045-0242ac110017", UID:"5a623e0d-8c70-11ea-99e8-0242ac110002", ResourceVersion:"8353114", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724019276, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"866829073"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-tg5s7", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001ba4c40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tg5s7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), 
Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tg5s7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-tg5s7", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), 
Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f31c38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0019d0fc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f31cc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f31ce0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001f31ce8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001f31cec)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724019277, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724019277, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724019277, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724019276, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.1.38", StartTime:(*v1.Time)(0xc00120a0c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0015c83f0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0015c8460)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://14782c11891ff021d7177a364b7662377101e60c57054f2b2cbde2a25a3fdec6"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00120a100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00120a0e0), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:28:46.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-qhdlr" for this suite. May 2 12:29:08.983: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:29:09.027: INFO: namespace: e2e-tests-init-container-qhdlr, resource: bindings, ignored listing per whitelist May 2 12:29:09.060: INFO: namespace e2e-tests-init-container-qhdlr deletion completed in 22.09704652s • [SLOW TEST:72.272 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:29:09.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to 
be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 2 12:29:09.155: INFO: Waiting up to 5m0s for pod "downwardapi-volume-857517e2-8c70-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-cdg2s" to be "success or failure" May 2 12:29:09.203: INFO: Pod "downwardapi-volume-857517e2-8c70-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 48.353024ms May 2 12:29:11.207: INFO: Pod "downwardapi-volume-857517e2-8c70-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051641352s May 2 12:29:13.210: INFO: Pod "downwardapi-volume-857517e2-8c70-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054947077s STEP: Saw pod success May 2 12:29:13.210: INFO: Pod "downwardapi-volume-857517e2-8c70-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:29:13.212: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-857517e2-8c70-11ea-8045-0242ac110017 container client-container: STEP: delete the pod May 2 12:29:13.296: INFO: Waiting for pod downwardapi-volume-857517e2-8c70-11ea-8045-0242ac110017 to disappear May 2 12:29:13.308: INFO: Pod downwardapi-volume-857517e2-8c70-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:29:13.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cdg2s" for this suite. 
May 2 12:29:19.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:29:19.396: INFO: namespace: e2e-tests-projected-cdg2s, resource: bindings, ignored listing per whitelist May 2 12:29:19.424: INFO: namespace e2e-tests-projected-cdg2s deletion completed in 6.113010664s • [SLOW TEST:10.364 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:29:19.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 2 12:29:19.552: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-lk6kv,SelfLink:/api/v1/namespaces/e2e-tests-watch-lk6kv/configmaps/e2e-watch-test-label-changed,UID:8ba4eb6a-8c70-11ea-99e8-0242ac110002,ResourceVersion:8353222,Generation:0,CreationTimestamp:2020-05-02 12:29:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} May 2 12:29:19.553: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-lk6kv,SelfLink:/api/v1/namespaces/e2e-tests-watch-lk6kv/configmaps/e2e-watch-test-label-changed,UID:8ba4eb6a-8c70-11ea-99e8-0242ac110002,ResourceVersion:8353223,Generation:0,CreationTimestamp:2020-05-02 12:29:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 2 12:29:19.553: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-lk6kv,SelfLink:/api/v1/namespaces/e2e-tests-watch-lk6kv/configmaps/e2e-watch-test-label-changed,UID:8ba4eb6a-8c70-11ea-99e8-0242ac110002,ResourceVersion:8353224,Generation:0,CreationTimestamp:2020-05-02 12:29:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 2 12:29:29.581: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-lk6kv,SelfLink:/api/v1/namespaces/e2e-tests-watch-lk6kv/configmaps/e2e-watch-test-label-changed,UID:8ba4eb6a-8c70-11ea-99e8-0242ac110002,ResourceVersion:8353246,Generation:0,CreationTimestamp:2020-05-02 12:29:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 2 12:29:29.582: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-lk6kv,SelfLink:/api/v1/namespaces/e2e-tests-watch-lk6kv/configmaps/e2e-watch-test-label-changed,UID:8ba4eb6a-8c70-11ea-99e8-0242ac110002,ResourceVersion:8353247,Generation:0,CreationTimestamp:2020-05-02 12:29:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 2 12:29:29.582: 
INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-lk6kv,SelfLink:/api/v1/namespaces/e2e-tests-watch-lk6kv/configmaps/e2e-watch-test-label-changed,UID:8ba4eb6a-8c70-11ea-99e8-0242ac110002,ResourceVersion:8353248,Generation:0,CreationTimestamp:2020-05-02 12:29:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:29:29.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-lk6kv" for this suite. May 2 12:29:35.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:29:35.638: INFO: namespace: e2e-tests-watch-lk6kv, resource: bindings, ignored listing per whitelist May 2 12:29:35.670: INFO: namespace e2e-tests-watch-lk6kv deletion completed in 6.083362467s • [SLOW TEST:16.246 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:29:35.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars May 2 12:29:35.786: INFO: Waiting up to 5m0s for pod "downward-api-9554eeb0-8c70-11ea-8045-0242ac110017" in namespace "e2e-tests-downward-api-9gzp5" to be "success or failure" May 2 12:29:35.790: INFO: Pod "downward-api-9554eeb0-8c70-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 3.1877ms May 2 12:29:37.794: INFO: Pod "downward-api-9554eeb0-8c70-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007360891s May 2 12:29:39.798: INFO: Pod "downward-api-9554eeb0-8c70-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012005511s STEP: Saw pod success May 2 12:29:39.798: INFO: Pod "downward-api-9554eeb0-8c70-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:29:39.802: INFO: Trying to get logs from node hunter-worker2 pod downward-api-9554eeb0-8c70-11ea-8045-0242ac110017 container dapi-container: STEP: delete the pod May 2 12:29:39.825: INFO: Waiting for pod downward-api-9554eeb0-8c70-11ea-8045-0242ac110017 to disappear May 2 12:29:39.830: INFO: Pod downward-api-9554eeb0-8c70-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:29:39.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-9gzp5" for this suite. 
May 2 12:29:45.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:29:45.900: INFO: namespace: e2e-tests-downward-api-9gzp5, resource: bindings, ignored listing per whitelist May 2 12:29:45.965: INFO: namespace e2e-tests-downward-api-9gzp5 deletion completed in 6.133358462s • [SLOW TEST:10.295 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:29:45.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's args May 2 12:29:46.132: INFO: Waiting up to 5m0s for pod "var-expansion-9b7cf9d0-8c70-11ea-8045-0242ac110017" in namespace "e2e-tests-var-expansion-xb7z9" to be "success or failure" May 2 12:29:46.141: INFO: Pod "var-expansion-9b7cf9d0-8c70-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.903152ms May 2 12:29:48.144: INFO: Pod "var-expansion-9b7cf9d0-8c70-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012791882s May 2 12:29:50.148: INFO: Pod "var-expansion-9b7cf9d0-8c70-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016621204s STEP: Saw pod success May 2 12:29:50.148: INFO: Pod "var-expansion-9b7cf9d0-8c70-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:29:50.151: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-9b7cf9d0-8c70-11ea-8045-0242ac110017 container dapi-container: STEP: delete the pod May 2 12:29:50.167: INFO: Waiting for pod var-expansion-9b7cf9d0-8c70-11ea-8045-0242ac110017 to disappear May 2 12:29:50.246: INFO: Pod var-expansion-9b7cf9d0-8c70-11ea-8045-0242ac110017 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:29:50.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-xb7z9" for this suite. 
May 2 12:29:56.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:29:56.320: INFO: namespace: e2e-tests-var-expansion-xb7z9, resource: bindings, ignored listing per whitelist May 2 12:29:56.344: INFO: namespace e2e-tests-var-expansion-xb7z9 deletion completed in 6.094693999s • [SLOW TEST:10.379 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:29:56.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:29:56.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "e2e-tests-kubelet-test-f87qt" for this suite. May 2 12:30:02.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:30:02.628: INFO: namespace: e2e-tests-kubelet-test-f87qt, resource: bindings, ignored listing per whitelist May 2 12:30:02.725: INFO: namespace e2e-tests-kubelet-test-f87qt deletion completed in 6.159441853s • [SLOW TEST:6.380 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:30:02.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 May 2 12:30:02.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 2 12:30:02.942: INFO: stderr: "" May 2 12:30:02.942: INFO: stdout: "Client Version: version.Info{Major:\"1\", 
Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T17:08:34Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:30:02.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-wr9vq" for this suite. May 2 12:30:08.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:30:08.985: INFO: namespace: e2e-tests-kubectl-wr9vq, resource: bindings, ignored listing per whitelist May 2 12:30:09.046: INFO: namespace e2e-tests-kubectl-wr9vq deletion completed in 6.098218595s • [SLOW TEST:6.320 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:30:09.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-27xdf A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-27xdf;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-27xdf A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-27xdf.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-27xdf.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-27xdf.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-27xdf.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-27xdf.svc;check="$$(dig +tcp +noall 
+answer +search _http._tcp.test-service-2.e2e-tests-dns-27xdf.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-27xdf.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-27xdf.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 14.40.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.40.14_udp@PTR;check="$$(dig +tcp +noall +answer +search 14.40.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.40.14_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-27xdf A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-27xdf;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-27xdf A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-27xdf;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-27xdf.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-27xdf.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-27xdf.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-27xdf.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc SRV)" && test -n "$$check" && 
echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-27xdf.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-27xdf.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-27xdf.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-27xdf.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-27xdf.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 14.40.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.40.14_udp@PTR;check="$$(dig +tcp +noall +answer +search 14.40.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.40.14_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 2 12:30:15.319: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:15.327: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:15.332: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:15.335: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:15.358: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:15.360: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:15.364: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: 
the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:15.367: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:15.370: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:15.373: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:15.376: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:15.379: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:15.402: INFO: Lookups using e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-27xdf jessie_tcp@dns-test-service.e2e-tests-dns-27xdf 
jessie_udp@dns-test-service.e2e-tests-dns-27xdf.svc jessie_tcp@dns-test-service.e2e-tests-dns-27xdf.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc] May 2 12:30:20.407: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:20.416: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:20.421: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:20.423: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:20.445: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:20.448: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:20.450: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could 
not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:20.453: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:20.456: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:20.458: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:20.461: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:20.463: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:20.480: INFO: Lookups using e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-27xdf jessie_tcp@dns-test-service.e2e-tests-dns-27xdf jessie_udp@dns-test-service.e2e-tests-dns-27xdf.svc 
jessie_tcp@dns-test-service.e2e-tests-dns-27xdf.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc] May 2 12:30:25.407: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:25.418: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:25.424: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:25.427: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:25.454: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:25.457: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017) May 2 12:30:25.465: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods 
dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:25.469: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:25.477: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:25.480: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:25.483: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:25.485: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:25.499: INFO: Lookups using e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-27xdf jessie_tcp@dns-test-service.e2e-tests-dns-27xdf jessie_udp@dns-test-service.e2e-tests-dns-27xdf.svc jessie_tcp@dns-test-service.e2e-tests-dns-27xdf.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc]
May 2 12:30:30.407: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:30.419: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:30.425: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:30.428: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:30.452: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:30.455: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:30.458: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:30.461: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:30.464: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:30.467: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:30.470: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:30.473: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:30.491: INFO: Lookups using e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-27xdf jessie_tcp@dns-test-service.e2e-tests-dns-27xdf jessie_udp@dns-test-service.e2e-tests-dns-27xdf.svc jessie_tcp@dns-test-service.e2e-tests-dns-27xdf.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc]
May 2 12:30:35.407: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:35.417: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:35.423: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:35.426: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:35.450: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:35.452: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:35.455: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:35.459: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:35.462: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:35.464: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:35.467: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:35.470: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:35.491: INFO: Lookups using e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-27xdf jessie_tcp@dns-test-service.e2e-tests-dns-27xdf jessie_udp@dns-test-service.e2e-tests-dns-27xdf.svc jessie_tcp@dns-test-service.e2e-tests-dns-27xdf.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc]
May 2 12:30:40.407: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:40.434: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:40.439: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:40.441: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:40.464: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:40.468: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:40.470: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:40.473: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-27xdf from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:40.476: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:40.478: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:40.481: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:40.484: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc from pod e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017: the server could not find the requested resource (get pods dns-test-a940038e-8c70-11ea-8045-0242ac110017)
May 2 12:30:40.503: INFO: Lookups using e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf wheezy_tcp@dns-test-service.e2e-tests-dns-27xdf.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-27xdf jessie_tcp@dns-test-service.e2e-tests-dns-27xdf jessie_udp@dns-test-service.e2e-tests-dns-27xdf.svc jessie_tcp@dns-test-service.e2e-tests-dns-27xdf.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-27xdf.svc]
May 2 12:30:45.494: INFO: DNS probes using e2e-tests-dns-27xdf/dns-test-a940038e-8c70-11ea-8045-0242ac110017 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:30:46.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-27xdf" for this suite.
May 2 12:30:52.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:30:52.230: INFO: namespace: e2e-tests-dns-27xdf, resource: bindings, ignored listing per whitelist
May 2 12:30:52.284: INFO: namespace e2e-tests-dns-27xdf deletion completed in 6.107674357s
• [SLOW TEST:43.238 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:30:52.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
May 2 12:30:52.444: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c30563e8-8c70-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-pswk4" to be "success or failure"
May 2 12:30:52.460: INFO: Pod "downwardapi-volume-c30563e8-8c70-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 16.464584ms
May 2 12:30:54.552: INFO: Pod "downwardapi-volume-c30563e8-8c70-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108623204s
May 2 12:30:56.557: INFO: Pod "downwardapi-volume-c30563e8-8c70-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.113205476s
STEP: Saw pod success
May 2 12:30:56.557: INFO: Pod "downwardapi-volume-c30563e8-8c70-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 12:30:56.560: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c30563e8-8c70-11ea-8045-0242ac110017 container client-container:
STEP: delete the pod
May 2 12:30:56.576: INFO: Waiting for pod downwardapi-volume-c30563e8-8c70-11ea-8045-0242ac110017 to disappear
May 2 12:30:56.580: INFO: Pod downwardapi-volume-c30563e8-8c70-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:30:56.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pswk4" for this suite.
May 2 12:31:02.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:31:02.622: INFO: namespace: e2e-tests-projected-pswk4, resource: bindings, ignored listing per whitelist
May 2 12:31:02.668: INFO: namespace e2e-tests-projected-pswk4 deletion completed in 6.085085679s
• [SLOW TEST:10.383 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:31:02.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:31:06.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-pvh44" for this suite.
May 2 12:31:56.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:31:56.900: INFO: namespace: e2e-tests-kubelet-test-pvh44, resource: bindings, ignored listing per whitelist
May 2 12:31:56.910: INFO: namespace e2e-tests-kubelet-test-pvh44 deletion completed in 50.084860529s
• [SLOW TEST:54.241 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:31:56.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 2 12:31:57.047: INFO: Waiting up to 5m0s for pod "pod-e9878741-8c70-11ea-8045-0242ac110017" in namespace "e2e-tests-emptydir-w4rx2" to be "success or failure"
May 2 12:31:57.065: INFO: Pod "pod-e9878741-8c70-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 18.391661ms
May 2 12:31:59.070: INFO: Pod "pod-e9878741-8c70-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023458209s
May 2 12:32:01.074: INFO: Pod "pod-e9878741-8c70-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02740645s
STEP: Saw pod success
May 2 12:32:01.074: INFO: Pod "pod-e9878741-8c70-11ea-8045-0242ac110017" satisfied condition "success or failure"
May 2 12:32:01.077: INFO: Trying to get logs from node hunter-worker2 pod pod-e9878741-8c70-11ea-8045-0242ac110017 container test-container:
STEP: delete the pod
May 2 12:32:01.174: INFO: Waiting for pod pod-e9878741-8c70-11ea-8045-0242ac110017 to disappear
May 2 12:32:01.326: INFO: Pod pod-e9878741-8c70-11ea-8045-0242ac110017 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:32:01.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-w4rx2" for this suite.
May 2 12:32:07.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:32:07.418: INFO: namespace: e2e-tests-emptydir-w4rx2, resource: bindings, ignored listing per whitelist
May 2 12:32:07.421: INFO: namespace e2e-tests-emptydir-w4rx2 deletion completed in 6.091428566s
• [SLOW TEST:10.511 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:32:07.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
May 2 12:32:07.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-msxfn'
May 2 12:32:09.991: INFO: stderr: ""
May 2 12:32:09.991: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 2 12:32:09.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-msxfn'
May 2 12:32:10.106: INFO: stderr: ""
May 2 12:32:10.106: INFO: stdout: "update-demo-nautilus-55hbg update-demo-nautilus-srfs5 "
May 2 12:32:10.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-55hbg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-msxfn'
May 2 12:32:10.224: INFO: stderr: ""
May 2 12:32:10.224: INFO: stdout: ""
May 2 12:32:10.224: INFO: update-demo-nautilus-55hbg is created but not running
May 2 12:32:15.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-msxfn'
May 2 12:32:15.332: INFO: stderr: ""
May 2 12:32:15.332: INFO: stdout: "update-demo-nautilus-55hbg update-demo-nautilus-srfs5 "
May 2 12:32:15.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-55hbg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-msxfn'
May 2 12:32:15.433: INFO: stderr: ""
May 2 12:32:15.433: INFO: stdout: "true"
May 2 12:32:15.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-55hbg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-msxfn'
May 2 12:32:15.546: INFO: stderr: ""
May 2 12:32:15.546: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 2 12:32:15.546: INFO: validating pod update-demo-nautilus-55hbg
May 2 12:32:15.551: INFO: got data: {
  "image": "nautilus.jpg"
}
May 2 12:32:15.551: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 2 12:32:15.551: INFO: update-demo-nautilus-55hbg is verified up and running
May 2 12:32:15.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-srfs5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-msxfn'
May 2 12:32:15.657: INFO: stderr: ""
May 2 12:32:15.657: INFO: stdout: "true"
May 2 12:32:15.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-srfs5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-msxfn'
May 2 12:32:15.755: INFO: stderr: ""
May 2 12:32:15.755: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 2 12:32:15.755: INFO: validating pod update-demo-nautilus-srfs5
May 2 12:32:15.759: INFO: got data: {
  "image": "nautilus.jpg"
}
May 2 12:32:15.759: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 2 12:32:15.759: INFO: update-demo-nautilus-srfs5 is verified up and running
STEP: using delete to clean up resources
May 2 12:32:15.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-msxfn'
May 2 12:32:15.867: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 2 12:32:15.867: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 2 12:32:15.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-msxfn'
May 2 12:32:15.981: INFO: stderr: "No resources found.\n"
May 2 12:32:15.981: INFO: stdout: ""
May 2 12:32:15.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-msxfn -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 2 12:32:16.078: INFO: stderr: ""
May 2 12:32:16.078: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:32:16.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-msxfn" for this suite.
May 2 12:32:38.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:32:38.186: INFO: namespace: e2e-tests-kubectl-msxfn, resource: bindings, ignored listing per whitelist
May 2 12:32:38.222: INFO: namespace e2e-tests-kubectl-msxfn deletion completed in 22.140416222s
• [SLOW TEST:30.801 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:32:38.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-2b6jp
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-2b6jp
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-2b6jp
STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-2b6jp
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-2b6jp
May 2 12:32:42.424: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2b6jp, name: ss-0, uid: 0411e0fa-8c71-11ea-99e8-0242ac110002, status phase: Pending. Waiting for statefulset controller to delete.
May 2 12:32:51.245: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2b6jp, name: ss-0, uid: 0411e0fa-8c71-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
May 2 12:32:51.256: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-2b6jp, name: ss-0, uid: 0411e0fa-8c71-11ea-99e8-0242ac110002, status phase: Failed. Waiting for statefulset controller to delete.
May 2 12:32:51.323: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-2b6jp
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-2b6jp
STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-2b6jp and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
May 2 12:32:55.406: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2b6jp
May 2 12:32:55.409: INFO: Scaling statefulset ss to 0
May 2 12:33:05.423: INFO: Waiting for statefulset status.replicas updated to 0
May 2 12:33:05.426: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:33:05.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-2b6jp" for this suite.
May 2 12:33:13.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 2 12:33:13.506: INFO: namespace: e2e-tests-statefulset-2b6jp, resource: bindings, ignored listing per whitelist
May 2 12:33:13.531: INFO: namespace e2e-tests-statefulset-2b6jp deletion completed in 8.085617468s
• [SLOW TEST:35.308 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
May 2 12:33:13.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-jqk5p
May 2 12:33:17.648: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-jqk5p
STEP: checking the pod's current state and verifying that restartCount is present
May 2 12:33:17.652: INFO: Initial restart count of pod liveness-http is 0
May 2 12:33:37.723: INFO: Restart count of pod e2e-tests-container-probe-jqk5p/liveness-http is now 1 (20.07127609s elapsed)
May 2 12:33:57.769: INFO: Restart count of pod e2e-tests-container-probe-jqk5p/liveness-http is now 2 (40.117468549s elapsed)
May 2 12:34:17.811: INFO: Restart count of pod e2e-tests-container-probe-jqk5p/liveness-http is now 3 (1m0.159476934s elapsed)
May 2 12:34:37.854: INFO: Restart count of pod e2e-tests-container-probe-jqk5p/liveness-http is now 4 (1m20.202717375s elapsed)
May 2 12:34:57.895: INFO: Restart count of pod e2e-tests-container-probe-jqk5p/liveness-http is now 5 (1m40.243604225s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
May 2 12:34:57.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-jqk5p" for this suite.
May 2 12:35:03.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:35:03.978: INFO: namespace: e2e-tests-container-probe-jqk5p, resource: bindings, ignored listing per whitelist May 2 12:35:04.027: INFO: namespace e2e-tests-container-probe-jqk5p deletion completed in 6.090203408s • [SLOW TEST:110.496 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:35:04.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 2 12:35:04.142: INFO: Waiting up to 5m0s for pod "downwardapi-volume-590b5428-8c71-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-m49zz" to be "success or failure" May 2 12:35:04.145: INFO: Pod "downwardapi-volume-590b5428-8c71-11ea-8045-0242ac110017": Phase="Pending", 
Reason="", readiness=false. Elapsed: 2.952598ms May 2 12:35:06.150: INFO: Pod "downwardapi-volume-590b5428-8c71-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007094895s May 2 12:35:08.154: INFO: Pod "downwardapi-volume-590b5428-8c71-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011462187s STEP: Saw pod success May 2 12:35:08.154: INFO: Pod "downwardapi-volume-590b5428-8c71-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:35:08.156: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-590b5428-8c71-11ea-8045-0242ac110017 container client-container: STEP: delete the pod May 2 12:35:08.186: INFO: Waiting for pod downwardapi-volume-590b5428-8c71-11ea-8045-0242ac110017 to disappear May 2 12:35:08.193: INFO: Pod downwardapi-volume-590b5428-8c71-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:35:08.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-m49zz" for this suite. 
May 2 12:35:14.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:35:14.273: INFO: namespace: e2e-tests-projected-m49zz, resource: bindings, ignored listing per whitelist May 2 12:35:14.303: INFO: namespace e2e-tests-projected-m49zz deletion completed in 6.105667863s • [SLOW TEST:10.276 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:35:14.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode May 2 12:35:14.462: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-vs88m" to be "success or failure" May 2 12:35:14.482: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.457452ms May 2 12:35:16.581: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.118952776s May 2 12:35:18.617: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154724552s STEP: Saw pod success May 2 12:35:18.617: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 2 12:35:18.627: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 2 12:35:18.655: INFO: Waiting for pod pod-host-path-test to disappear May 2 12:35:18.674: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:35:18.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-vs88m" for this suite. May 2 12:35:24.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:35:24.786: INFO: namespace: e2e-tests-hostpath-vs88m, resource: bindings, ignored listing per whitelist May 2 12:35:24.791: INFO: namespace e2e-tests-hostpath-vs88m deletion completed in 6.112849733s • [SLOW TEST:10.488 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:35:24.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-656a8015-8c71-11ea-8045-0242ac110017 STEP: Creating a pod to test consume configMaps May 2 12:35:24.931: INFO: Waiting up to 5m0s for pod "pod-configmaps-656e9a51-8c71-11ea-8045-0242ac110017" in namespace "e2e-tests-configmap-xbzr6" to be "success or failure" May 2 12:35:24.948: INFO: Pod "pod-configmaps-656e9a51-8c71-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 17.101103ms May 2 12:35:26.953: INFO: Pod "pod-configmaps-656e9a51-8c71-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021759548s May 2 12:35:28.958: INFO: Pod "pod-configmaps-656e9a51-8c71-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026634087s STEP: Saw pod success May 2 12:35:28.958: INFO: Pod "pod-configmaps-656e9a51-8c71-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:35:28.960: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-656e9a51-8c71-11ea-8045-0242ac110017 container configmap-volume-test: STEP: delete the pod May 2 12:35:28.980: INFO: Waiting for pod pod-configmaps-656e9a51-8c71-11ea-8045-0242ac110017 to disappear May 2 12:35:28.984: INFO: Pod pod-configmaps-656e9a51-8c71-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:35:28.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-xbzr6" for this suite. 
May 2 12:35:34.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:35:35.056: INFO: namespace: e2e-tests-configmap-xbzr6, resource: bindings, ignored listing per whitelist May 2 12:35:35.074: INFO: namespace e2e-tests-configmap-xbzr6 deletion completed in 6.086449632s • [SLOW TEST:10.283 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:35:35.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin May 2 12:35:35.202: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b8e1c40-8c71-11ea-8045-0242ac110017" in namespace "e2e-tests-downward-api-zcv6p" to be "success or failure" May 2 12:35:35.206: INFO: Pod "downwardapi-volume-6b8e1c40-8c71-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.459543ms May 2 12:35:37.210: INFO: Pod "downwardapi-volume-6b8e1c40-8c71-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008031495s May 2 12:35:39.214: INFO: Pod "downwardapi-volume-6b8e1c40-8c71-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011427253s STEP: Saw pod success May 2 12:35:39.214: INFO: Pod "downwardapi-volume-6b8e1c40-8c71-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:35:39.216: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6b8e1c40-8c71-11ea-8045-0242ac110017 container client-container: STEP: delete the pod May 2 12:35:39.278: INFO: Waiting for pod downwardapi-volume-6b8e1c40-8c71-11ea-8045-0242ac110017 to disappear May 2 12:35:39.281: INFO: Pod downwardapi-volume-6b8e1c40-8c71-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:35:39.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-zcv6p" for this suite. 
May 2 12:35:45.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:35:45.348: INFO: namespace: e2e-tests-downward-api-zcv6p, resource: bindings, ignored listing per whitelist May 2 12:35:45.388: INFO: namespace e2e-tests-downward-api-zcv6p deletion completed in 6.101194082s • [SLOW TEST:10.313 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:35:45.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-71b33107-8c71-11ea-8045-0242ac110017 STEP: Creating a pod to test consume configMaps May 2 12:35:45.521: INFO: Waiting up to 5m0s for pod "pod-configmaps-71b599b8-8c71-11ea-8045-0242ac110017" in namespace "e2e-tests-configmap-t95n2" to be "success or failure" May 2 12:35:45.525: INFO: Pod "pod-configmaps-71b599b8-8c71-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.172056ms May 2 12:35:47.529: INFO: Pod "pod-configmaps-71b599b8-8c71-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00806056s May 2 12:35:49.533: INFO: Pod "pod-configmaps-71b599b8-8c71-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011702277s STEP: Saw pod success May 2 12:35:49.533: INFO: Pod "pod-configmaps-71b599b8-8c71-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:35:49.536: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-71b599b8-8c71-11ea-8045-0242ac110017 container configmap-volume-test: STEP: delete the pod May 2 12:35:49.556: INFO: Waiting for pod pod-configmaps-71b599b8-8c71-11ea-8045-0242ac110017 to disappear May 2 12:35:49.561: INFO: Pod pod-configmaps-71b599b8-8c71-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:35:49.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-t95n2" for this suite. 
May 2 12:35:55.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:35:55.606: INFO: namespace: e2e-tests-configmap-t95n2, resource: bindings, ignored listing per whitelist May 2 12:35:55.661: INFO: namespace e2e-tests-configmap-t95n2 deletion completed in 6.09766468s • [SLOW TEST:10.273 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:35:55.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 2 12:36:03.858: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 2 12:36:03.878: INFO: Pod pod-with-prestop-http-hook still exists May 2 12:36:05.878: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 2 12:36:05.882: INFO: Pod pod-with-prestop-http-hook still exists May 2 12:36:07.878: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 2 12:36:07.882: INFO: Pod pod-with-prestop-http-hook still exists May 2 12:36:09.878: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 2 12:36:09.883: INFO: Pod pod-with-prestop-http-hook still exists May 2 12:36:11.878: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 2 12:36:11.883: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:36:11.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-gpjnt" for this suite. 
May 2 12:36:33.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:36:33.983: INFO: namespace: e2e-tests-container-lifecycle-hook-gpjnt, resource: bindings, ignored listing per whitelist May 2 12:36:33.985: INFO: namespace e2e-tests-container-lifecycle-hook-gpjnt deletion completed in 22.09087211s • [SLOW TEST:38.324 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:36:33.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-9h4wg STEP: creating a selector STEP: Creating the service pods in kubernetes May 2 12:36:34.123: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 2 12:36:58.215: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.67:8080/dial?request=hostName&protocol=http&host=10.244.1.47&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-9h4wg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 12:36:58.215: INFO: >>> kubeConfig: /root/.kube/config I0502 12:36:58.249701 6 log.go:172] (0xc00032fce0) (0xc001cb14a0) Create stream I0502 12:36:58.249734 6 log.go:172] (0xc00032fce0) (0xc001cb14a0) Stream added, broadcasting: 1 I0502 12:36:58.252064 6 log.go:172] (0xc00032fce0) Reply frame received for 1 I0502 12:36:58.252122 6 log.go:172] (0xc00032fce0) (0xc001cb1540) Create stream I0502 12:36:58.252139 6 log.go:172] (0xc00032fce0) (0xc001cb1540) Stream added, broadcasting: 3 I0502 12:36:58.253289 6 log.go:172] (0xc00032fce0) Reply frame received for 3 I0502 12:36:58.253329 6 log.go:172] (0xc00032fce0) (0xc001cb1680) Create stream I0502 12:36:58.253342 6 log.go:172] (0xc00032fce0) (0xc001cb1680) Stream added, broadcasting: 5 I0502 12:36:58.254268 6 log.go:172] (0xc00032fce0) Reply frame received for 5 I0502 12:36:58.317846 6 log.go:172] (0xc00032fce0) Data frame received for 3 I0502 12:36:58.317874 6 log.go:172] (0xc001cb1540) (3) Data frame handling I0502 12:36:58.317892 6 log.go:172] (0xc001cb1540) (3) Data frame sent I0502 12:36:58.318710 6 log.go:172] (0xc00032fce0) Data frame received for 5 I0502 12:36:58.318748 6 log.go:172] (0xc001cb1680) (5) Data frame handling I0502 12:36:58.318984 6 log.go:172] (0xc00032fce0) Data frame received for 3 I0502 12:36:58.319006 6 log.go:172] (0xc001cb1540) (3) Data frame handling I0502 12:36:58.320399 6 log.go:172] (0xc00032fce0) Data frame received for 1 I0502 12:36:58.320413 6 log.go:172] (0xc001cb14a0) (1) Data frame handling I0502 12:36:58.320421 6 log.go:172] (0xc001cb14a0) (1) Data frame sent I0502 12:36:58.320430 6 log.go:172] (0xc00032fce0) (0xc001cb14a0) Stream removed, broadcasting: 1 I0502 12:36:58.320444 6 log.go:172] (0xc00032fce0) Go away 
received I0502 12:36:58.320548 6 log.go:172] (0xc00032fce0) (0xc001cb14a0) Stream removed, broadcasting: 1 I0502 12:36:58.320570 6 log.go:172] (0xc00032fce0) (0xc001cb1540) Stream removed, broadcasting: 3 I0502 12:36:58.320580 6 log.go:172] (0xc00032fce0) (0xc001cb1680) Stream removed, broadcasting: 5 May 2 12:36:58.320: INFO: Waiting for endpoints: map[] May 2 12:36:58.324: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.67:8080/dial?request=hostName&protocol=http&host=10.244.2.66&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-9h4wg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 2 12:36:58.324: INFO: >>> kubeConfig: /root/.kube/config I0502 12:36:58.357396 6 log.go:172] (0xc000ae9970) (0xc0018d2c80) Create stream I0502 12:36:58.357436 6 log.go:172] (0xc000ae9970) (0xc0018d2c80) Stream added, broadcasting: 1 I0502 12:36:58.359762 6 log.go:172] (0xc000ae9970) Reply frame received for 1 I0502 12:36:58.359814 6 log.go:172] (0xc000ae9970) (0xc001aeb220) Create stream I0502 12:36:58.359829 6 log.go:172] (0xc000ae9970) (0xc001aeb220) Stream added, broadcasting: 3 I0502 12:36:58.360673 6 log.go:172] (0xc000ae9970) Reply frame received for 3 I0502 12:36:58.360711 6 log.go:172] (0xc000ae9970) (0xc0023648c0) Create stream I0502 12:36:58.360725 6 log.go:172] (0xc000ae9970) (0xc0023648c0) Stream added, broadcasting: 5 I0502 12:36:58.362033 6 log.go:172] (0xc000ae9970) Reply frame received for 5 I0502 12:36:58.433373 6 log.go:172] (0xc000ae9970) Data frame received for 3 I0502 12:36:58.433433 6 log.go:172] (0xc001aeb220) (3) Data frame handling I0502 12:36:58.433470 6 log.go:172] (0xc001aeb220) (3) Data frame sent I0502 12:36:58.434405 6 log.go:172] (0xc000ae9970) Data frame received for 3 I0502 12:36:58.434422 6 log.go:172] (0xc001aeb220) (3) Data frame handling I0502 12:36:58.434443 6 log.go:172] (0xc000ae9970) Data frame received for 5 I0502 12:36:58.434452 
6 log.go:172] (0xc0023648c0) (5) Data frame handling I0502 12:36:58.435835 6 log.go:172] (0xc000ae9970) Data frame received for 1 I0502 12:36:58.435878 6 log.go:172] (0xc0018d2c80) (1) Data frame handling I0502 12:36:58.435912 6 log.go:172] (0xc0018d2c80) (1) Data frame sent I0502 12:36:58.435939 6 log.go:172] (0xc000ae9970) (0xc0018d2c80) Stream removed, broadcasting: 1 I0502 12:36:58.435967 6 log.go:172] (0xc000ae9970) Go away received I0502 12:36:58.436087 6 log.go:172] (0xc000ae9970) (0xc0018d2c80) Stream removed, broadcasting: 1 I0502 12:36:58.436102 6 log.go:172] (0xc000ae9970) (0xc001aeb220) Stream removed, broadcasting: 3 I0502 12:36:58.436125 6 log.go:172] (0xc000ae9970) (0xc0023648c0) Stream removed, broadcasting: 5 May 2 12:36:58.436: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:36:58.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-9h4wg" for this suite. 
May 2 12:37:22.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:37:22.490: INFO: namespace: e2e-tests-pod-network-test-9h4wg, resource: bindings, ignored listing per whitelist May 2 12:37:22.572: INFO: namespace e2e-tests-pod-network-test-9h4wg deletion completed in 24.131882521s • [SLOW TEST:48.586 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client May 2 12:37:22.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-ab9d22c8-8c71-11ea-8045-0242ac110017 STEP: Creating a pod to test consume secrets May 2 12:37:22.813: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-aba1f4e1-8c71-11ea-8045-0242ac110017" in namespace "e2e-tests-projected-6r27f" to be "success or failure" May 2 12:37:22.822: INFO: Pod 
"pod-projected-secrets-aba1f4e1-8c71-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 8.625987ms May 2 12:37:24.826: INFO: Pod "pod-projected-secrets-aba1f4e1-8c71-11ea-8045-0242ac110017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01293851s May 2 12:37:26.830: INFO: Pod "pod-projected-secrets-aba1f4e1-8c71-11ea-8045-0242ac110017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017025047s STEP: Saw pod success May 2 12:37:26.830: INFO: Pod "pod-projected-secrets-aba1f4e1-8c71-11ea-8045-0242ac110017" satisfied condition "success or failure" May 2 12:37:26.834: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-aba1f4e1-8c71-11ea-8045-0242ac110017 container projected-secret-volume-test: STEP: delete the pod May 2 12:37:26.854: INFO: Waiting for pod pod-projected-secrets-aba1f4e1-8c71-11ea-8045-0242ac110017 to disappear May 2 12:37:26.858: INFO: Pod pod-projected-secrets-aba1f4e1-8c71-11ea-8045-0242ac110017 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 May 2 12:37:26.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-6r27f" for this suite. 
May 2 12:37:32.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 2 12:37:32.889: INFO: namespace: e2e-tests-projected-6r27f, resource: bindings, ignored listing per whitelist May 2 12:37:32.948: INFO: namespace e2e-tests-projected-6r27f deletion completed in 6.086795977s • [SLOW TEST:10.376 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSMay 2 12:37:32.948: INFO: Running AfterSuite actions on all nodes May 2 12:37:32.948: INFO: Running AfterSuite actions on node 1 May 2 12:37:32.948: INFO: Skipping dumping logs from cluster Ran 200 of 2164 Specs in 6648.729 seconds SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped PASS